Nov 22 07:10:35 crc systemd[1]: Starting Kubernetes Kubelet... Nov 22 07:10:35 crc restorecon[4808]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 22 07:10:35 
crc restorecon[4808]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 22 07:10:35 crc restorecon[4808]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:35 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 
07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc 
restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 
crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 
crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:36 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 
07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:37 crc 
restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc 
restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:37 crc restorecon[4808]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 22 07:10:39 crc kubenswrapper[4858]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.164647 4858 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168865 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168891 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168899 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168906 4858 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168914 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168922 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168929 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168936 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168942 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168948 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168955 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168962 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168968 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168975 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168982 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168989 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.168995 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169001 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169008 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169020 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169026 4858 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169036 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169046 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169053 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169060 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169068 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169074 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169081 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169088 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169094 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169101 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169108 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169114 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169121 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169127 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169134 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169142 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169151 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169158 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169165 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169173 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169180 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169187 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169197 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169204 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169211 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169217 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169224 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169230 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169237 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169243 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169249 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169261 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169269 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169277 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169284 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169291 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169299 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169306 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169312 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169351 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169358 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169365 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169371 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169378 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169386 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169392 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169398 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169405 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:39 crc 
kubenswrapper[4858]: W1122 07:10:39.169412 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.169418 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169574 4858 flags.go:64] FLAG: --address="0.0.0.0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169588 4858 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169600 4858 flags.go:64] FLAG: --anonymous-auth="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169610 4858 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169619 4858 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169627 4858 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169636 4858 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169645 4858 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169652 4858 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169660 4858 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169668 4858 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169676 4858 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169683 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169691 4858 flags.go:64] FLAG: --cgroup-root="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169698 4858 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169706 4858 flags.go:64] FLAG: --client-ca-file="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169713 4858 flags.go:64] FLAG: --cloud-config="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169720 4858 flags.go:64] FLAG: --cloud-provider="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169727 4858 flags.go:64] FLAG: --cluster-dns="[]" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169738 4858 flags.go:64] FLAG: --cluster-domain="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169745 4858 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169753 4858 flags.go:64] FLAG: --config-dir="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169760 4858 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169768 4858 flags.go:64] FLAG: --container-log-max-files="5" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169777 4858 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169784 4858 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169792 4858 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 22 07:10:39 crc 
kubenswrapper[4858]: I1122 07:10:39.169800 4858 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169807 4858 flags.go:64] FLAG: --contention-profiling="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169814 4858 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169821 4858 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169829 4858 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169835 4858 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169844 4858 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169852 4858 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169860 4858 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169867 4858 flags.go:64] FLAG: --enable-load-reader="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169874 4858 flags.go:64] FLAG: --enable-server="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169881 4858 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169890 4858 flags.go:64] FLAG: --event-burst="100" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169898 4858 flags.go:64] FLAG: --event-qps="50" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169905 4858 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169913 4858 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169920 4858 flags.go:64] FLAG: --eviction-hard="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169928 4858 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169936 4858 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169944 4858 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169951 4858 flags.go:64] FLAG: --eviction-soft="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169959 4858 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169965 4858 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169973 4858 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169980 4858 flags.go:64] FLAG: --experimental-mounter-path="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169987 4858 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.169994 4858 flags.go:64] FLAG: --fail-swap-on="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170001 4858 flags.go:64] FLAG: --feature-gates="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170010 4858 flags.go:64] FLAG: --file-check-frequency="20s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170017 4858 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: 
I1122 07:10:39.170024 4858 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170032 4858 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170039 4858 flags.go:64] FLAG: --healthz-port="10248" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170048 4858 flags.go:64] FLAG: --help="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170055 4858 flags.go:64] FLAG: --hostname-override="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170062 4858 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170069 4858 flags.go:64] FLAG: --http-check-frequency="20s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170077 4858 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170084 4858 flags.go:64] FLAG: --image-credential-provider-config="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170091 4858 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170099 4858 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170106 4858 flags.go:64] FLAG: --image-service-endpoint="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170113 4858 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170119 4858 flags.go:64] FLAG: --kube-api-burst="100" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170127 4858 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170135 4858 flags.go:64] FLAG: --kube-api-qps="50" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170141 4858 flags.go:64] FLAG: --kube-reserved="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170149 4858 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170155 4858 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170163 4858 flags.go:64] FLAG: --kubelet-cgroups="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170171 4858 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170178 4858 flags.go:64] FLAG: --lock-file="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170186 4858 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170193 4858 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170201 4858 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170219 4858 flags.go:64] FLAG: --log-json-split-stream="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170226 4858 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170233 4858 flags.go:64] FLAG: --log-text-split-stream="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170240 4858 flags.go:64] FLAG: --logging-format="text" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170248 4858 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 
07:10:39.170256 4858 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170263 4858 flags.go:64] FLAG: --manifest-url="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170270 4858 flags.go:64] FLAG: --manifest-url-header="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170279 4858 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170286 4858 flags.go:64] FLAG: --max-open-files="1000000" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170304 4858 flags.go:64] FLAG: --max-pods="110" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170332 4858 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170341 4858 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170348 4858 flags.go:64] FLAG: --memory-manager-policy="None" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170355 4858 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170363 4858 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170370 4858 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170379 4858 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170396 4858 flags.go:64] FLAG: --node-status-max-images="50" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170403 4858 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170411 4858 flags.go:64] FLAG: --oom-score-adj="-999" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170419 4858 flags.go:64] FLAG: --pod-cidr="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170426 4858 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170436 4858 flags.go:64] FLAG: --pod-manifest-path="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170443 4858 flags.go:64] FLAG: --pod-max-pids="-1" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170451 4858 flags.go:64] FLAG: --pods-per-core="0" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170458 4858 flags.go:64] FLAG: --port="10250" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170465 4858 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170473 4858 flags.go:64] FLAG: --provider-id="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170481 4858 flags.go:64] FLAG: --qos-reserved="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170488 4858 flags.go:64] FLAG: --read-only-port="10255" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170496 4858 flags.go:64] FLAG: --register-node="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170503 4858 flags.go:64] FLAG: --register-schedulable="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170512 4858 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 
07:10:39.170524 4858 flags.go:64] FLAG: --registry-burst="10" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170532 4858 flags.go:64] FLAG: --registry-qps="5" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170540 4858 flags.go:64] FLAG: --reserved-cpus="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170547 4858 flags.go:64] FLAG: --reserved-memory="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170557 4858 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170564 4858 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170572 4858 flags.go:64] FLAG: --rotate-certificates="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170579 4858 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170587 4858 flags.go:64] FLAG: --runonce="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170594 4858 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170602 4858 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170610 4858 flags.go:64] FLAG: --seccomp-default="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170618 4858 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170626 4858 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170635 4858 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170643 4858 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170651 4858 flags.go:64] FLAG: --storage-driver-password="root" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170657 4858 flags.go:64] FLAG: --storage-driver-secure="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170663 4858 flags.go:64] FLAG: --storage-driver-table="stats" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170670 4858 flags.go:64] FLAG: --storage-driver-user="root" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170676 4858 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170682 4858 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170689 4858 flags.go:64] FLAG: --system-cgroups="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170695 4858 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170704 4858 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170711 4858 flags.go:64] FLAG: --tls-cert-file="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170717 4858 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170726 4858 flags.go:64] FLAG: --tls-min-version="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170732 4858 flags.go:64] FLAG: --tls-private-key-file="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170738 4858 flags.go:64] FLAG: --topology-manager-policy="none" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170744 4858 flags.go:64] 
FLAG: --topology-manager-policy-options="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170750 4858 flags.go:64] FLAG: --topology-manager-scope="container" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170756 4858 flags.go:64] FLAG: --v="2" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170765 4858 flags.go:64] FLAG: --version="false" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170773 4858 flags.go:64] FLAG: --vmodule="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170780 4858 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.170787 4858 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170926 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170933 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170940 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170948 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170955 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170960 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170966 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170971 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170977 4858 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170982 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170988 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.170995 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
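The FLAG dump above records only the command-line values; the kubelet merges them with /etc/kubernetes/kubelet.conf to produce its effective configuration. As a side note (the node name "crc" here is taken from the log hostname and may differ from the registered node name), the merged result can usually be read back through the API server's node proxy:

    kubectl get --raw "/api/v1/nodes/crc/proxy/configz"
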
Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171002 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171008 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171013 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171020 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171026 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171032 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171037 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171043 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171050 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171056 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171062 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171067 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171072 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171078 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171083 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171089 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171094 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171099 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171104 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171109 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171118 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171124 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171130 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171135 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171140 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171146 4858 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstall Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171151 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171156 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171161 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171166 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171171 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171176 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171182 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171188 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171194 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171206 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171216 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171223 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171230 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171236 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171244 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171250 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171256 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171263 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171270 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171276 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171282 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171288 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171294 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171301 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171307 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171351 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171366 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171375 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171383 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171390 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171395 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171401 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.171410 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.177730 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.190377 4858 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.190410 4858 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190468 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190475 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
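The feature_gate.go:386 summary above is the gate set this kubelet actually registers; the long runs of "unrecognized feature gate" warnings appear to be OpenShift cluster-level gate names that the kubelet binary does not know and therefore ignores. As a hypothetical illustration only, the recognized gates could equally be carried in the config file's featureGates map, using the values from that summary:

    # sketch: featureGates stanza in KubeletConfiguration (values copied from the logged summary)
    featureGates:
      CloudDualStackNodeIPs: true
      DisableKubeletCloudCredentialProviders: true
      KMSv1: true
      ValidatingAdmissionPolicy: true
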
Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190481 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190487 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190491 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190495 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190499 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190503 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190508 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190512 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190515 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190519 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190523 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190526 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190529 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190533 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190537 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190540 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190543 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190547 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190551 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190555 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190558 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190561 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190565 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190568 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190572 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190575 4858 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190579 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190582 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190586 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190590 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190593 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190597 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190602 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190605 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190609 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190612 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190616 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190619 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190623 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190626 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190630 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190633 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190636 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190640 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190643 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190647 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190650 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190654 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190658 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190661 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190665 4858 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190669 
4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190672 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190675 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190679 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190682 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190686 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190691 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190696 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190700 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190704 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190708 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190714 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190719 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190724 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190728 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190732 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190736 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190740 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.190746 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190855 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190861 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190865 4858 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190870 4858 feature_gate.go:353] Setting 
GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190874 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190878 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190882 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190886 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190889 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190894 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190898 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190902 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190906 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190911 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190915 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190918 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190922 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190926 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190930 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190933 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190937 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190940 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190944 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190947 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190951 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190955 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190958 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190963 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190966 4858 feature_gate.go:330] 
unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190969 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190973 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190977 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190981 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190985 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190989 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190993 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.190996 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191000 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191003 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191007 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191010 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191014 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191017 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191021 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191024 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191027 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191031 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191035 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191040 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191043 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191047 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191051 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191055 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191059 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191063 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191067 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191072 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191076 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191080 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191084 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191088 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191092 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191096 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191100 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191103 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191107 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191110 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191113 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191117 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191120 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.191124 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.191129 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.191268 4858 server.go:940] "Client rotation is on, will bootstrap in background" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.209040 4858 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.209129 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.219611 4858 server.go:997] "Starting client certificate rotation" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.219658 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.222977 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-18 02:23:49.120643394 +0000 UTC Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.223116 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1363h13m9.897533014s for next certificate rotation Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.262072 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.264562 4858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.291626 4858 log.go:25] "Validated CRI v1 runtime API" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.375518 4858 log.go:25] "Validated CRI v1 image API" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.377474 4858 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.389070 4858 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-22-07-00-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.389136 4858 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.421872 4858 manager.go:217] Machine: {Timestamp:2025-11-22 07:10:39.416730792 +0000 UTC m=+1.258153878 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:75279d0b-50e9-4469-9fd3-3a3571789513 
BootID:8142ece0-65e2-4a75-afd0-f871d9afb049 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:21:6b:c8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:21:6b:c8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e0:99:0a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:74:b1:bd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:dd:72:f3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9f:52:0d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:58:dc:c1 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:9d:95:ed Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:61:29:86:e7:5d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:0b:70:ba:09:3f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.422238 4858 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.422649 4858 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.423407 4858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.423598 4858 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.423630 4858 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.423848 4858 topology_manager.go:138] "Creating topology manager with none policy" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.423859 4858 container_manager_linux.go:303] "Creating device plugin manager" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.424418 4858 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.426406 4858 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.426713 4858 state_mem.go:36] "Initialized new in-memory state store" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.426827 4858 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.436842 4858 kubelet.go:418] "Attempting to sync node with API server" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.436907 4858 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.436952 4858 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.436972 4858 kubelet.go:324] "Adding apiserver pod source" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.436991 4858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.446678 4858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.447581 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.450427 4858 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452122 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452153 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452162 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452172 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452186 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452194 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452202 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452217 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452228 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452237 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452249 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452258 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.452280 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.453021 4858 server.go:1280] "Started kubelet" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.454406 4858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 22 07:10:39 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.455872 4858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.457586 4858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.458582 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.458724 4858 server.go:460] "Adding debug handlers to kubelet server" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.462862 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.462973 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.463152 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.463232 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.458692 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.463358 4858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.463437 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:18:33.669085139 +0000 UTC Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.463527 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 887h7m54.205568527s for next certificate rotation Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.463597 4858 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.463608 4858 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.463906 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.465426 4858 desired_state_of_world_populator.go:146] "Desired state populator starts to 
run" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.466290 4858 factory.go:55] Registering systemd factory Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.471518 4858 factory.go:221] Registration of the systemd container factory successfully Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.472217 4858 factory.go:153] Registering CRI-O factory Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.472236 4858 factory.go:221] Registration of the crio container factory successfully Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.472581 4858 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.472712 4858 factory.go:103] Registering Raw factory Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.472732 4858 manager.go:1196] Started watching for new ooms in manager Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.475013 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="200ms" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.475288 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.475395 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.476741 4858 manager.go:319] Starting recovery of all containers Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.475085 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a429b710c7fe4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC m=+1.294402208,LastTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC m=+1.294402208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486077 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486165 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486179 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486193 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486238 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486253 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486297 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486333 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486349 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486364 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486378 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486391 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486405 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486428 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486508 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486526 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486542 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486555 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486568 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486581 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486594 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486608 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.486651 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491251 4858 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491395 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491417 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491432 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491456 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491472 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491493 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491507 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491521 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491542 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491557 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491572 4858 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491586 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491599 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491614 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491629 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491647 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491660 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491673 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491690 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491702 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491721 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491741 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491759 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491771 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491785 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491799 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491812 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491839 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.491853 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492029 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492048 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492071 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492088 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492137 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492153 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492169 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492181 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492195 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492209 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492223 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492239 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492338 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492353 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492368 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492383 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492396 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492411 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492426 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492440 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492454 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492467 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492482 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492497 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492515 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492528 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492541 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492559 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492573 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492587 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492599 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492613 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492625 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492645 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492658 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492672 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492685 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492697 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492710 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492724 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492737 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492749 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492763 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492778 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492790 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492801 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492815 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492827 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492843 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492857 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492871 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492885 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492907 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492923 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492939 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492953 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492970 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.492984 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493010 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493024 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493039 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493054 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493066 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493079 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493123 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493141 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493160 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493174 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493191 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493203 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493215 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493227 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493241 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493253 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493271 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493285 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493299 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493310 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493341 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493354 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493369 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493380 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493394 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493407 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493417 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493430 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493442 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493455 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493468 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493481 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493493 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493509 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493529 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493542 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493553 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493566 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493582 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493605 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493618 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493639 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493660 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493675 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493687 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493702 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493716 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493736 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493750 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493787 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493801 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493813 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493828 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493842 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493858 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493870 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493883 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493894 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.493982 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494103 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494128 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494147 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494169 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494189 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494207 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494226 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494244 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494260 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494280 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494296 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494343 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494365 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494386 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494407 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494426 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494466 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494487 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494503 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494521 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494538 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494552 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494567 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494583 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494602 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494619 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494635 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494650 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494664 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494682 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494698 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494716 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494736 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494752 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494770 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494794 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494811 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494830 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494907 4858 reconstruct.go:97] "Volume reconstruction finished" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.494918 4858 reconciler.go:26] "Reconciler: start to sync state" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.500197 4858 manager.go:324] Recovery completed Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.517887 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.520990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.521044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.521054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.522294 4858 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.522368 4858 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.522413 4858 state_mem.go:36] "Initialized new in-memory state store" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.531927 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.534342 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.534380 4858 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.534401 4858 kubelet.go:2335] "Starting kubelet main sync loop" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.534557 4858 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 22 07:10:39 crc kubenswrapper[4858]: W1122 07:10:39.535231 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.535309 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.564195 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.635411 4858 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.664697 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.676713 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="400ms" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.746900 4858 policy_none.go:49] "None policy: Start" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.748458 4858 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.748491 4858 state_mem.go:35] "Initializing new in-memory state store" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.764992 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.830281 4858 manager.go:334] 
"Starting Device Plugin manager" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.830589 4858 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.830610 4858 server.go:79] "Starting device plugin registration server" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.831104 4858 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.831127 4858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.831289 4858 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.831432 4858 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.831440 4858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.835661 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.835792 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.837494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.837554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.837572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.837749 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.838293 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.838367 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839345 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839434 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.839468 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.840882 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.841025 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.841092 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.841992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842444 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842596 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.842621 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843282 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843305 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.843737 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.843996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.900860 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.900916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.900942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.900961 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.900978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901011 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901026 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901041 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901122 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.901231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.931458 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.933355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.933389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.933401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4858]: I1122 07:10:39.933426 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:39 crc kubenswrapper[4858]: E1122 07:10:39.933955 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002473 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002601 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002665 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002688 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: 
I1122 07:10:40.003224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.003279 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.002951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.078004 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="800ms" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.134933 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.136587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.136630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.136639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.136666 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.137090 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.168936 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.175156 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.205854 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.224918 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.233296 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.349816 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7e8417a5ca6a59b3e2c81fdd7a8834579be492641d8df623ab1a46e92510e3eb WatchSource:0}: Error finding container 7e8417a5ca6a59b3e2c81fdd7a8834579be492641d8df623ab1a46e92510e3eb: Status 404 returned error can't find the container with id 7e8417a5ca6a59b3e2c81fdd7a8834579be492641d8df623ab1a46e92510e3eb Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.351891 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-42d39b0848904fd67f5820a136839f06f4f1c53ce635fe21b1cbdadb19e50f6d WatchSource:0}: Error finding container 42d39b0848904fd67f5820a136839f06f4f1c53ce635fe21b1cbdadb19e50f6d: Status 404 returned error can't find the container with id 42d39b0848904fd67f5820a136839f06f4f1c53ce635fe21b1cbdadb19e50f6d Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.355738 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-aa0e4dac733ce3d4c5e779e30a849fadddbd27e49862265c4a83c7e666ebd25f WatchSource:0}: Error finding container aa0e4dac733ce3d4c5e779e30a849fadddbd27e49862265c4a83c7e666ebd25f: Status 404 returned error can't find the container with id aa0e4dac733ce3d4c5e779e30a849fadddbd27e49862265c4a83c7e666ebd25f Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.356109 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-84191cea0306d0df0dff46b245951fd78a69653722be2583dc62ef78c4aa03b0 WatchSource:0}: Error finding container 84191cea0306d0df0dff46b245951fd78a69653722be2583dc62ef78c4aa03b0: Status 404 returned error can't find the container with id 84191cea0306d0df0dff46b245951fd78a69653722be2583dc62ef78c4aa03b0 Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.357550 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-806f190418d23f88cc9ceea6d919a100f3dfa235f74ed7de86bff6b759810f8c WatchSource:0}: Error finding container 806f190418d23f88cc9ceea6d919a100f3dfa235f74ed7de86bff6b759810f8c: Status 404 returned error can't find the container with id 806f190418d23f88cc9ceea6d919a100f3dfa235f74ed7de86bff6b759810f8c Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.365194 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.365310 4858 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.459522 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.519585 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.519702 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.537473 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.539134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.539180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.539198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.539235 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.539814 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.539977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa0e4dac733ce3d4c5e779e30a849fadddbd27e49862265c4a83c7e666ebd25f"} Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.541825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84191cea0306d0df0dff46b245951fd78a69653722be2583dc62ef78c4aa03b0"} Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.543548 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"42d39b0848904fd67f5820a136839f06f4f1c53ce635fe21b1cbdadb19e50f6d"} Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.545131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7e8417a5ca6a59b3e2c81fdd7a8834579be492641d8df623ab1a46e92510e3eb"} Nov 22 07:10:40 crc kubenswrapper[4858]: I1122 07:10:40.546300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"806f190418d23f88cc9ceea6d919a100f3dfa235f74ed7de86bff6b759810f8c"} Nov 22 07:10:40 crc kubenswrapper[4858]: W1122 07:10:40.642077 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.642209 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:40 crc kubenswrapper[4858]: E1122 07:10:40.879393 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="1.6s" Nov 22 07:10:41 crc kubenswrapper[4858]: W1122 07:10:41.067686 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:41 crc kubenswrapper[4858]: E1122 07:10:41.067764 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.340391 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.341883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.342215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.342227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.342253 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:41 crc kubenswrapper[4858]: E1122 07:10:41.342784 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:41 crc kubenswrapper[4858]: I1122 07:10:41.460078 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.460257 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:42 crc kubenswrapper[4858]: E1122 07:10:42.480596 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="3.2s" Nov 22 07:10:42 crc kubenswrapper[4858]: W1122 07:10:42.909516 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:42 crc kubenswrapper[4858]: E1122 07:10:42.909569 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.943399 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.946694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.946758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.946774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:42 crc kubenswrapper[4858]: I1122 07:10:42.946990 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:42 crc kubenswrapper[4858]: E1122 07:10:42.949213 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:43 crc kubenswrapper[4858]: E1122 07:10:43.089147 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a429b710c7fe4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC m=+1.294402208,LastTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC m=+1.294402208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:43 crc kubenswrapper[4858]: W1122 07:10:43.134137 4858 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:43 crc kubenswrapper[4858]: E1122 07:10:43.134226 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.459912 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:43 crc kubenswrapper[4858]: W1122 07:10:43.512812 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:43 crc kubenswrapper[4858]: E1122 07:10:43.512918 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.556295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45"} Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.557941 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9"} Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.559810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa"} Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.561401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24"} Nov 22 07:10:43 crc kubenswrapper[4858]: I1122 07:10:43.562746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07"} Nov 22 07:10:43 crc kubenswrapper[4858]: W1122 07:10:43.754426 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.129.56.159:6443: connect: connection refused Nov 22 07:10:43 crc kubenswrapper[4858]: E1122 07:10:43.754509 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.460091 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.569947 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07" exitCode=0 Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.570045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07"} Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.570082 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.571721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.571792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.571820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.573747 4858 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45" exitCode=0 Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.573841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45"} Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.573932 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.575468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.575528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.575541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.577228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa"} Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.582915 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa" exitCode=0 Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.583020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa"} Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.583035 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.584830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.584901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.584922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.586589 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24" exitCode=0 Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.586630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24"} Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.586740 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.588650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.588680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.588690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.590827 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.591881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.591913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:44 crc kubenswrapper[4858]: I1122 07:10:44.591922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:45 crc kubenswrapper[4858]: I1122 07:10:45.460085 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial 
tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:45 crc kubenswrapper[4858]: I1122 07:10:45.591835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80"} Nov 22 07:10:45 crc kubenswrapper[4858]: E1122 07:10:45.681680 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="6.4s" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.149959 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.152006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.152070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.152098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.152140 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:46 crc kubenswrapper[4858]: E1122 07:10:46.152882 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.460356 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.598924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c"} Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.604214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955"} Nov 22 07:10:46 crc kubenswrapper[4858]: I1122 07:10:46.606365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2"} Nov 22 07:10:47 crc kubenswrapper[4858]: W1122 07:10:47.278126 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:47 crc kubenswrapper[4858]: E1122 07:10:47.278248 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.459645 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.610168 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e" exitCode=0 Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.610293 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.610827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e"} Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.611129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.611156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:47 crc kubenswrapper[4858]: I1122 07:10:47.611168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:48 crc kubenswrapper[4858]: W1122 07:10:48.117373 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:48 crc kubenswrapper[4858]: E1122 07:10:48.117509 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:48 crc kubenswrapper[4858]: W1122 07:10:48.204831 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:48 crc kubenswrapper[4858]: E1122 07:10:48.204944 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:48 crc kubenswrapper[4858]: I1122 07:10:48.459753 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:48 crc 
kubenswrapper[4858]: I1122 07:10:48.612111 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:48 crc kubenswrapper[4858]: I1122 07:10:48.612956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:48 crc kubenswrapper[4858]: I1122 07:10:48.612984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:48 crc kubenswrapper[4858]: I1122 07:10:48.612993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:48 crc kubenswrapper[4858]: W1122 07:10:48.874213 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:48 crc kubenswrapper[4858]: E1122 07:10:48.874301 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:49 crc kubenswrapper[4858]: I1122 07:10:49.460351 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:49 crc kubenswrapper[4858]: I1122 07:10:49.617987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c"} Nov 22 07:10:49 crc kubenswrapper[4858]: E1122 07:10:49.843909 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:10:50 crc kubenswrapper[4858]: I1122 07:10:50.459507 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.459848 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.626299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae"} Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.626492 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.627471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.627508 
4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.627520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.630486 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2"} Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.633640 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.633816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde"} Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.634531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.634563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:51 crc kubenswrapper[4858]: I1122 07:10:51.634574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4858]: E1122 07:10:52.082866 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.159:6443: connect: connection refused" interval="7s" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.166166 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.166240 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.166290 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.459836 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.553048 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.554944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.555002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.555014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.555051 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:52 crc kubenswrapper[4858]: E1122 07:10:52.555871 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.159:6443: connect: connection refused" node="crc" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.639886 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae" exitCode=0 Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.639946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae"} Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.640028 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.640261 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.641882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4858]: I1122 07:10:52.818496 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:53 crc kubenswrapper[4858]: E1122 07:10:53.091147 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.159:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a429b710c7fe4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC 
m=+1.294402208,LastTimestamp:2025-11-22 07:10:39.452979172 +0000 UTC m=+1.294402208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.459554 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.644856 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce"} Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.647149 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.647707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852"} Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.648002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.648031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:53 crc kubenswrapper[4858]: I1122 07:10:53.648040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.460026 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.588408 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.654038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd42a832eecf35db34ca68f9e8358a9cd1825d114ddfb090d8c80a9d4651e5af"} Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.654102 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.654171 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.655417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.655450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.655459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 
07:10:54.655718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.655789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.655807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:54 crc kubenswrapper[4858]: I1122 07:10:54.811838 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.459950 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.656237 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.656440 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.657465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.657504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.657513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:55 crc kubenswrapper[4858]: I1122 07:10:55.744938 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.460192 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.661655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52"} Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.664298 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21cf2f14944b35d986f700de555e6abac2f645c43ec6a12789f665d33f2a5a1a"} Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.664411 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.665478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.665505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:56 crc kubenswrapper[4858]: I1122 07:10:56.665516 
4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:57 crc kubenswrapper[4858]: W1122 07:10:57.040314 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:57 crc kubenswrapper[4858]: E1122 07:10:57.040675 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:57 crc kubenswrapper[4858]: W1122 07:10:57.223993 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:57 crc kubenswrapper[4858]: E1122 07:10:57.224076 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.459978 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.671081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951"} Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.671211 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.672608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.672717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.672737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.675312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"530cdf70dba751b5fc1820866197efb4202df3dbf7a90fc8ff81fc943fe74f27"} Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.675386 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.679824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:57 crc kubenswrapper[4858]: 
I1122 07:10:57.679921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:57 crc kubenswrapper[4858]: I1122 07:10:57.680002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:57 crc kubenswrapper[4858]: W1122 07:10:57.749524 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.159:6443: connect: connection refused Nov 22 07:10:57 crc kubenswrapper[4858]: E1122 07:10:57.749624 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.159:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.102861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.683217 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.683243 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.683501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0102408edc3d74c5e35bfd93b50aba129374e757f58bce310661f730e4b51750"} Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.683554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"26359fa9cd09d63015438732ecba7b4c5271f1103ee19fb63dfa857e03182b3c"} Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.683724 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.684927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:58 crc kubenswrapper[4858]: I1122 07:10:58.790168 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:59 
crc kubenswrapper[4858]: I1122 07:10:59.556220 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.557693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.557747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.557756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.557777 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.686436 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.686480 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4858]: I1122 07:10:59.687852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4858]: E1122 07:10:59.844046 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:11:00 crc kubenswrapper[4858]: I1122 07:11:00.688749 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:00 crc kubenswrapper[4858]: I1122 07:11:00.689806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4858]: I1122 07:11:00.689850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4858]: I1122 07:11:00.689862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.001370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.001572 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.002813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.002862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.002872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.731928 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.732113 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.733411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.733448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4858]: I1122 07:11:02.733468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4858]: I1122 07:11:05.166282 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:11:05 crc kubenswrapper[4858]: I1122 07:11:05.166375 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.075090 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.075275 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.076384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.076538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.076629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.103340 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.103423 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get 
\"https://192.168.126.11:6443/livez\": context deadline exceeded" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.131641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.298602 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.298698 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.711424 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.713282 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951" exitCode=255 Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.713358 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951"} Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.713552 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.713577 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.714788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.714819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.714829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.715053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.715092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.715106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.715843 4858 scope.go:117] "RemoveContainer" containerID="1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951" Nov 22 07:11:08 crc kubenswrapper[4858]: I1122 07:11:08.756692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 22 07:11:09 crc 
kubenswrapper[4858]: I1122 07:11:09.719204 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.721381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005"} Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.721484 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.721651 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4858]: I1122 07:11:09.722750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4858]: E1122 07:11:09.845136 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:11:11 crc kubenswrapper[4858]: I1122 07:11:11.908785 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.108974 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.109146 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.109295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.110430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.110489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.110509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.113593 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 
07:11:13.293153 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.298656 4858 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.298830 4858 trace.go:236] Trace[1423191696]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:11:00.056) (total time: 13242ms): Nov 22 07:11:13 crc kubenswrapper[4858]: Trace[1423191696]: ---"Objects listed" error: 13242ms (07:11:13.298) Nov 22 07:11:13 crc kubenswrapper[4858]: Trace[1423191696]: [13.242358354s] [13.242358354s] END Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.298852 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.308695 4858 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.309019 4858 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.310513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.310556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.310566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.310583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.310594 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.328025 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.336529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.336581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.336592 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.336608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.336618 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.346057 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.350745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.350788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.350820 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.350840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.350852 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.361596 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.365669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.365699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.365708 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.365722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.365732 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.375295 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.378363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.378399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.378410 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.378426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.378436 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.389573 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.389681 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.391938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.392016 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.392035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.392063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.392084 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.432531 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.437104 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.459472 4858 apiserver.go:52] "Watching apiserver" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.461899 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462190 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462536 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462732 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.462744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462939 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.462954 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.463410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.463435 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.464115 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.464936 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465132 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465179 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465290 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465549 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465822 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.466021 4858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.465896 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.466898 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.501544 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.502173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505496 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505549 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505565 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505612 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505645 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505661 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505745 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505761 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505864 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505924 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505974 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.505990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506014 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506130 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506263 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506286 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506436 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506532 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506547 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506611 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506693 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506797 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506811 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506828 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506891 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.506994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507072 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507142 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507237 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" 
(UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507289 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507401 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507466 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507483 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507556 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507811 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.507979 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508274 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.508934 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.509134 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:14.009111663 +0000 UTC m=+35.850534729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509156 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509187 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509740 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.509991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510017 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510124 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510150 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510501 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.510660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511083 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511109 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511228 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511252 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511421 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511467 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511487 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511515 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511540 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511589 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511636 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511679 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511704 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 
07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511750 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511776 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511805 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511882 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511933 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.511996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512101 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512207 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512312 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512514 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512837 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512866 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512883 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512897 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512926 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512955 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512969 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512981 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.512993 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513005 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513018 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513033 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513046 4858 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513062 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513075 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513088 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513100 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513112 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513126 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513140 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513152 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513164 4858 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513177 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513190 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513202 4858 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513415 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.513977 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.514478 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.514545 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:14.014526995 +0000 UTC m=+35.855950111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.514615 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.514646 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:14.014637919 +0000 UTC m=+35.856061045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514985 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515344 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515516 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515877 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515905 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515925 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.515968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516339 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516473 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516598 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.516742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.517395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518167 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518266 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518568 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518894 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518914 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.518764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519028 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519174 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519295 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519712 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.519736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520212 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520394 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.520826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521814 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.521970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.522100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.522235 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.522311 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.522802 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.522868 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.514256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.523292 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.523671 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.523680 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.523682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524953 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.524993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525355 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525400 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525444 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.525932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526063 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526668 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.526912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.527131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.527206 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.527536 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.527939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.528304 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.528356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.528673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.528677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.528964 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.529474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.529588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.529770 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530004 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.529994 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530496 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.530839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531149 4858 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531300 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531506 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531627 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.531859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532142 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532491 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.532898 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.533084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.533107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.533193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.534802 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.535045 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.535240 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.536110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.536168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.536503 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.536878 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.536970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.537960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.538171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.538417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539391 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539925 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539977 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.540227 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.539925 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.540610 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.540777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.541116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.541342 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.541523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.541548 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.541901 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.542133 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.542265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.542552 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.548304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.549482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.549814 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.555641 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.557516 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.557933 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.558112 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.558110 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.558757 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.559922 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.562050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.562396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.563710 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.563734 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.563747 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.563783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.563798 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:14.0637813 +0000 UTC m=+35.905204306 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.563932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.566626 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.566657 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.566670 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.566723 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:14.066709013 +0000 UTC m=+35.908132019 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.571250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.574853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.575608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.577980 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.593245 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.593771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.596058 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.597045 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.598526 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.599313 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.601362 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.602142 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 22 07:11:13 crc 
kubenswrapper[4858]: I1122 07:11:13.603811 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.604483 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.605079 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.606101 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.607549 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.609174 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.609928 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.610077 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.610722 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.611279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.611310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.611332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.611349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.611361 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.612192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.612429 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.613062 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614286 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614307 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614333 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614346 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614358 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614370 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614383 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614395 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath 
\"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614408 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614420 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614431 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614443 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614455 4858 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614466 4858 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614479 4858 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614491 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614502 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614514 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614524 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614535 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614546 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath 
\"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614557 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614561 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614570 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614582 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614595 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614606 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614618 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614629 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614642 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614656 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614668 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614679 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614690 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc 
kubenswrapper[4858]: I1122 07:11:13.614701 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614714 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614725 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614737 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614748 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614759 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614772 4858 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614783 4858 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614796 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614807 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614819 4858 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614831 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614843 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614854 4858 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.614864 4858 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615284 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615300 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615309 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615331 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615340 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615348 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615356 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615366 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 
crc kubenswrapper[4858]: I1122 07:11:13.615376 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615385 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615383 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615393 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615402 4858 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615412 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615420 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615428 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615437 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615445 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615453 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615462 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615471 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 
07:11:13.615479 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615486 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615494 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615502 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615510 4858 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615518 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615525 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615533 4858 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615541 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615549 4858 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615556 4858 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615563 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615571 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615578 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615587 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615594 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615603 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615610 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615618 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615627 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615635 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615642 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615650 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615659 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615666 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615674 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615682 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 22 
07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615689 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615697 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615704 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615711 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615720 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615727 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615735 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615742 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615750 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615758 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615766 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615796 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615896 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615914 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615925 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615935 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615946 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615954 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615962 4858 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615971 4858 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615980 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615989 4858 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.615998 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616000 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616007 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node 
\"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616059 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616072 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616081 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616091 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616099 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616113 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616122 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616131 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616138 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616146 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616154 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616162 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616172 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616184 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616195 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616207 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616218 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616232 4858 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616241 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616250 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616259 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616268 4858 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616276 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616285 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616293 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616301 4858 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616309 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616334 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616343 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616351 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616359 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616368 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616376 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616385 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616394 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616405 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616413 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616421 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616430 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616438 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616448 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616457 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616465 4858 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616474 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616483 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616492 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.616501 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.617434 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.618114 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.619496 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.620153 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.620525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.620801 4858 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.621451 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.623975 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.624662 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.625893 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 
22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.626871 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.628073 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.628870 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.629909 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.630605 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.631907 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.631908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.632541 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.633696 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.634439 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.635616 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.636169 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.637199 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.638032 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.639501 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.640123 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.641603 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.642174 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.642867 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.644123 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.644710 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.645791 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ttxk5"] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.646133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.649681 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.649980 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.650286 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.659754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.668593 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc 
kubenswrapper[4858]: I1122 07:11:13.678838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.688116 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.705898 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.713100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.713130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.713138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.713151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.713162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.717936 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgd9m\" (UniqueName: \"kubernetes.io/projected/1043a8b7-9753-47c5-88da-a72e0a062eb7-kube-api-access-fgd9m\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.718087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1043a8b7-9753-47c5-88da-a72e0a062eb7-hosts-file\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.718210 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.718284 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.720200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.731336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: E1122 07:11:13.737155 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.740614 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.742644 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.776715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.784982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:11:13 crc kubenswrapper[4858]: W1122 07:11:13.792440 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c44806dd30e85ad6e4dfd307e8eecf6535abf6ebce5cd68a45ce3701c7cd7402 WatchSource:0}: Error finding container c44806dd30e85ad6e4dfd307e8eecf6535abf6ebce5cd68a45ce3701c7cd7402: Status 404 returned error can't find the container with id c44806dd30e85ad6e4dfd307e8eecf6535abf6ebce5cd68a45ce3701c7cd7402 Nov 22 07:11:13 crc kubenswrapper[4858]: W1122 07:11:13.795210 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ff71856b7cf3142a6048006db3202da8df98bca5d82af1374567fde8aa4f03a1 WatchSource:0}: Error finding container ff71856b7cf3142a6048006db3202da8df98bca5d82af1374567fde8aa4f03a1: Status 404 returned error can't find the container with id ff71856b7cf3142a6048006db3202da8df98bca5d82af1374567fde8aa4f03a1 Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.801170 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.815525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.815560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.815571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.815588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.815598 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.818659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgd9m\" (UniqueName: \"kubernetes.io/projected/1043a8b7-9753-47c5-88da-a72e0a062eb7-kube-api-access-fgd9m\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.818702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1043a8b7-9753-47c5-88da-a72e0a062eb7-hosts-file\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.818793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1043a8b7-9753-47c5-88da-a72e0a062eb7-hosts-file\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.844888 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgd9m\" (UniqueName: \"kubernetes.io/projected/1043a8b7-9753-47c5-88da-a72e0a062eb7-kube-api-access-fgd9m\") pod \"node-resolver-ttxk5\" (UID: \"1043a8b7-9753-47c5-88da-a72e0a062eb7\") " pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.918246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.918281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.918291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.918308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.918336 4858 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4858]: I1122 07:11:13.963857 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ttxk5" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.020058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.020196 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.02017502 +0000 UTC m=+36.861598016 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.020567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.020745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.020710 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.021087 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.021070658 +0000 UTC m=+36.862493664 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.020915 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.021307 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.021297566 +0000 UTC m=+36.862720572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.020369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.021508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.021606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.021748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.021850 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.121503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.121558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121747 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121754 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121770 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121778 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121785 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121790 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121839 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.121822519 +0000 UTC m=+36.963245525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.121864 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.12184876 +0000 UTC m=+36.963271766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.125197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.125241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.125252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.125280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.125294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.227622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.227656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.227667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.227696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.227707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.330448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.330519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.330530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.330551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.330564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.432789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.433004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.433092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.433218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.433304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.534734 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.534888 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.536427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.536461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.536473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.536488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.536500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.639060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.639100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.639111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.639128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.639139 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.733875 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttxk5" event={"ID":"1043a8b7-9753-47c5-88da-a72e0a062eb7","Type":"ContainerStarted","Data":"4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.733929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ttxk5" event={"ID":"1043a8b7-9753-47c5-88da-a72e0a062eb7","Type":"ContainerStarted","Data":"1aff50c19352891096524e64df707364fa3661d548275b8dee5c09a7fda401c0"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.736049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.736083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.736095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3f6f13c465b93ff908071965d62c876ecb5573bd1e6629435a00fcbaac644f9a"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.737556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ff71856b7cf3142a6048006db3202da8df98bca5d82af1374567fde8aa4f03a1"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.738934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.738965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c44806dd30e85ad6e4dfd307e8eecf6535abf6ebce5cd68a45ce3701c7cd7402"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.740724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.740827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.740886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.740947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.741003 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.747400 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.756103 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc 
kubenswrapper[4858]: I1122 07:11:14.768786 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.780601 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.790561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.800389 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.808937 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.818184 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: E1122 07:11:14.823411 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.834351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.844048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.844095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.844109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.844129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.844141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.855443 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.866945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.876773 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.886221 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.895970 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.905547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.914795 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.924927 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.933613 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.946462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.946505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.946514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.946531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.946541 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.988565 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qkh9t"] Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.989013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.989074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zbjb2"] Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.989750 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.991502 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.991534 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.991645 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.994024 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.995181 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.995380 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.995557 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.996162 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncp4k"] Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.997658 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.997925 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.997960 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 22 07:11:14 crc kubenswrapper[4858]: I1122 07:11:14.998344 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.000747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-56l5j"] Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.001081 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009430 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009578 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.010018 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009639 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009717 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009829 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009915 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 22 
07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.009973 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.011297 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.024109 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.029015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.029154 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:17.029126995 +0000 UTC m=+38.870550001 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.029224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.029260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.029366 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.029420 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:17.029405334 +0000 UTC m=+38.870828330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.029577 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.029691 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:17.029664452 +0000 UTC m=+38.871087518 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.034757 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.049084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.049134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.049147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.049192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.049206 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.051576 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.064722 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.080422 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.097203 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.113811 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.129616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-cnibin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc 
kubenswrapper[4858]: I1122 07:11:15.129664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-cni-binary-copy\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.129686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.129846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-system-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.129908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-multus-daemon-config\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.129963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-system-cni-dir\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130077 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config\") pod \"ovnkube-node-ncp4k\" (UID: 
\"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-rootfs\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-netns\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-multus-certs\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130577 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-socket-dir-parent\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-kubelet\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-conf-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthw4\" (UniqueName: \"kubernetes.io/projected/e7ea6513-de67-4c47-8329-7f922012c318-kube-api-access-tthw4\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-os-release\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130830 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s68f9\" (UniqueName: \"kubernetes.io/projected/a6492476-649f-4291-81c3-e6f5a6398b70-kube-api-access-s68f9\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrztm\" (UniqueName: \"kubernetes.io/projected/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-kube-api-access-xrztm\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-bin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.130960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-proxy-tls\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-etc-kubernetes\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6nb\" (UniqueName: \"kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-multus\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-hostroot\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-os-release\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131148 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-k8s-cni-cncf-io\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-mcd-auth-proxy-config\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131425 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131447 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131458 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131464 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131473 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131496 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131542 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:17.131504307 +0000 UTC m=+38.972927493 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.131565 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:17.131556168 +0000 UTC m=+38.972979394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-cnibin\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.131643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.149484 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.151501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.151545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.151557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.151577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.151587 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.172468 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.188084 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.204224 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.232978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-rootfs\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-netns\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-multus-certs\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-socket-dir-parent\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-kubelet\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-conf-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthw4\" (UniqueName: \"kubernetes.io/projected/e7ea6513-de67-4c47-8329-7f922012c318-kube-api-access-tthw4\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-os-release\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s68f9\" (UniqueName: \"kubernetes.io/projected/a6492476-649f-4291-81c3-e6f5a6398b70-kube-api-access-s68f9\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " 
pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrztm\" (UniqueName: \"kubernetes.io/projected/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-kube-api-access-xrztm\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-bin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-proxy-tls\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-etc-kubernetes\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk6nb\" (UniqueName: \"kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-multus\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233588 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-hostroot\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc 
kubenswrapper[4858]: I1122 07:11:15.233628 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-os-release\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-k8s-cni-cncf-io\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-mcd-auth-proxy-config\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233758 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 
07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-cnibin\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-cnibin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233838 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-cni-binary-copy\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-system-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-multus-daemon-config\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-system-cni-dir\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.233935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.234707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.234726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235015 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-etc-kubernetes\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-rootfs\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-os-release\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-socket-dir-parent\") pod \"multus-56l5j\" (UID: 
\"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-kubelet\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-conf-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235775 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-multus-certs\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-multus\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235881 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-hostroot\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.235987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-os-release\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-k8s-cni-cncf-io\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-multus-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-var-lib-cni-bin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-host-run-netns\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236371 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: 
\"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236387 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236422 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-cnibin\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-system-cni-dir\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7ea6513-de67-4c47-8329-7f922012c318-system-cni-dir\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.236556 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6492476-649f-4291-81c3-e6f5a6398b70-cnibin\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.237121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-cni-binary-copy\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.237174 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7ea6513-de67-4c47-8329-7f922012c318-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.237195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-mcd-auth-proxy-config\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.237222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a6492476-649f-4291-81c3-e6f5a6398b70-multus-daemon-config\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.239248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-proxy-tls\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.246268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.251041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.254112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.254148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.254156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.254171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.254180 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.259935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthw4\" (UniqueName: \"kubernetes.io/projected/e7ea6513-de67-4c47-8329-7f922012c318-kube-api-access-tthw4\") pod \"multus-additional-cni-plugins-zbjb2\" (UID: \"e7ea6513-de67-4c47-8329-7f922012c318\") " pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.259943 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrztm\" (UniqueName: \"kubernetes.io/projected/4ac3f217-ad73-4e89-b703-b42a3c6c9ed4-kube-api-access-xrztm\") pod \"machine-config-daemon-qkh9t\" (UID: \"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\") " pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.269145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.273491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s68f9\" (UniqueName: \"kubernetes.io/projected/a6492476-649f-4291-81c3-e6f5a6398b70-kube-api-access-s68f9\") pod \"multus-56l5j\" (UID: \"a6492476-649f-4291-81c3-e6f5a6398b70\") " pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.273514 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk6nb\" (UniqueName: \"kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb\") pod \"ovnkube-node-ncp4k\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.284801 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.295614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4
b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.306586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.311037 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.317447 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.318511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.327117 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.330488 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.334826 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-56l5j" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.352029 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.357274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.357357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.357369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.357386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.357397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.371661 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: W1122 07:11:15.382547 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6492476_649f_4291_81c3_e6f5a6398b70.slice/crio-efbdab29fb5549cae21f51353ef8ecae63ae58b928829916caaf7f2192ade9b0 WatchSource:0}: Error finding container efbdab29fb5549cae21f51353ef8ecae63ae58b928829916caaf7f2192ade9b0: Status 404 returned error can't find the container with id efbdab29fb5549cae21f51353ef8ecae63ae58b928829916caaf7f2192ade9b0 Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.392230 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.461834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.461882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.461892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.461910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.461925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.535457 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.535614 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.535746 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:15 crc kubenswrapper[4858]: E1122 07:11:15.535873 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.541845 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.564223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.564268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.564279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.564295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.564311 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.598092 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.600093 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.666851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.666886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.666896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.666911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.666921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.744176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.744228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"907f73ff457f2be47e09bf767af72d1137c237e185bc3683711a8ec33e40c7fd"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.746094 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7" exitCode=0 Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.746229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.746274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"7eac266bcff4c7c476e55991826351d6846f2174d30e9044d8bf0cb20d229fd0"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.749232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" 
event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerStarted","Data":"fc1723772bc724876b45210f670a07803748704c01123ec13720a0011a661dab"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.750902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerStarted","Data":"63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.750972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerStarted","Data":"efbdab29fb5549cae21f51353ef8ecae63ae58b928829916caaf7f2192ade9b0"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.762766 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.769211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.769249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.769260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.769277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.769289 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.778087 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.791375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.802559 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.814214 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.823658 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.834055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.845507 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.864035 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.872021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.872060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.872071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.872088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.872100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.876058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.887527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.898101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.913404 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.926237 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.941253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.955683 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.973262 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.974627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.974650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.974661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.974677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.974689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4858]: I1122 07:11:15.991583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bc
c3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.004388 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.018113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.030426 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.043936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.055349 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.066402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.078898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.078948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.078959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.078975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.078986 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.080015 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.096064 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.181926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.181971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.181991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.182008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.182017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.285286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.285704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.285719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.285746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.285756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.388221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.388270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.388283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.388298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.388309 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.466952 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8b5rw"] Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.467354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.470515 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.470857 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.471599 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.471746 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.484861 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.491270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.491307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.491334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.491352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.491363 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.500645 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.516914 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.531886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.535586 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:16 crc kubenswrapper[4858]: E1122 07:11:16.535734 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.546710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86hzb\" (UniqueName: \"kubernetes.io/projected/095c751d-e5c2-4c33-9041-4bdcb32f1269-kube-api-access-86hzb\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.546788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/095c751d-e5c2-4c33-9041-4bdcb32f1269-host\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.546827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/095c751d-e5c2-4c33-9041-4bdcb32f1269-serviceca\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.547491 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.559123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.572228 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.587097 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d74
2fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.596684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.596758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.596770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.596793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.596808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.614551 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bc
c3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.632201 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.647669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/095c751d-e5c2-4c33-9041-4bdcb32f1269-serviceca\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.647741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86hzb\" (UniqueName: \"kubernetes.io/projected/095c751d-e5c2-4c33-9041-4bdcb32f1269-kube-api-access-86hzb\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.647768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/095c751d-e5c2-4c33-9041-4bdcb32f1269-host\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.647817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/095c751d-e5c2-4c33-9041-4bdcb32f1269-host\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.648016 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.649801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/095c751d-e5c2-4c33-9041-4bdcb32f1269-serviceca\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.662592 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.669099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86hzb\" (UniqueName: \"kubernetes.io/projected/095c751d-e5c2-4c33-9041-4bdcb32f1269-kube-api-access-86hzb\") pod \"node-ca-8b5rw\" (UID: \"095c751d-e5c2-4c33-9041-4bdcb32f1269\") " pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.679610 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.694734 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.699460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.699503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.699512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.699529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.699539 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.757022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.757075 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.757087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.759306 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac" exitCode=0 Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.759434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.762153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.763737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.776937 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.780556 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8b5rw" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.799698 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.803310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.803368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.803381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.803401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.803413 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.818638 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.842977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.856090 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.871855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.890553 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.906806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.906856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.906867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.906887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.906900 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.908694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.924461 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4858]: I1122 07:11:16.969467 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.004838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.010587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.010627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.010638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.010653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.010664 
4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.046812 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.052194 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.052356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.052413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.052524 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:21.052493357 +0000 UTC m=+42.893916363 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.052530 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.052601 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:21.05259484 +0000 UTC m=+42.894017836 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.052531 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.052651 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:21.052645422 +0000 UTC m=+42.894068418 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.086091 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.113810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.113856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.113870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.113889 
4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.113903 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.125833 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.153547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.153600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153735 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153770 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 
07:11:17.153781 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153827 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153868 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153884 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153848 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:21.153822766 +0000 UTC m=+42.995245822 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.153975 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:21.153961481 +0000 UTC m=+42.995384507 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.167120 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.205522 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.216906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.216942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.216951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.216968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.216979 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.243729 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.284505 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.320464 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.320508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.320521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.320540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.320552 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.334583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc 
kubenswrapper[4858]: I1122 07:11:17.335733 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.387089 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.424451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.424497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.424508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.424525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.424535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.427491 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.466366 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.506181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.526431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.526483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.526494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.526510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.526522 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.534938 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.534940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.535080 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:17 crc kubenswrapper[4858]: E1122 07:11:17.535246 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.552566 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.587296 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\
\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.626029 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.628696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.628740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.628751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.628770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.628782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.668107 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.703665 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.731579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.731621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.731634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.731653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.731665 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.767871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerStarted","Data":"459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.770670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.770710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.770725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.772019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8b5rw" event={"ID":"095c751d-e5c2-4c33-9041-4bdcb32f1269","Type":"ContainerStarted","Data":"f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.772053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8b5rw" event={"ID":"095c751d-e5c2-4c33-9041-4bdcb32f1269","Type":"ContainerStarted","Data":"7f47def8ea771f7299427541c42636aa9befbd1834ba5adaa384529085d627e6"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.782035 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.791157 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.827245 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.833826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.833862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.833873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.833890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.833899 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.866805 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.902867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.936368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.936409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.936732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.936770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.936790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.948310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4858]: I1122 07:11:17.988169 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.035599 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.039365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.039405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.039414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.039428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.039439 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.065862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.104714 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.143285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.143364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.143379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.143397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.143674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.144214 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.193129 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.226288 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.246376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.246426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.246439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.246460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.246478 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.264882 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.304708 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.347937 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.349485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.349537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.349548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.349568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.349580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.384894 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.424752 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operato
r@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.451780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.451815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.451825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.451842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.451853 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.466116 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.503963 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.534810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:18 crc kubenswrapper[4858]: E1122 07:11:18.534969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.542201 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.554027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.554072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.554083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.554101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.554113 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.586634 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.625969 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.656338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.656378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.656387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.656402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.656411 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.665268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.704840 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.745394 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.759397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.759464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.759516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.759531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.759541 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.787891 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bc
c3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.824930 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.862029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.862076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.862086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.862102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.862113 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.965200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.965246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.965256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.965272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4858]: I1122 07:11:18.965282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.067965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.068008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.068019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.068037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.068051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.170653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.170712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.170725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.170743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.170758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.273218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.273272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.273284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.273305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.273343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.376457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.376494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.376503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.376517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.376528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.481356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.481397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.481405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.481420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.481429 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.534916 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:19 crc kubenswrapper[4858]: E1122 07:11:19.535609 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.536091 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:19 crc kubenswrapper[4858]: E1122 07:11:19.537014 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.552532 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.565546 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.582498 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.584784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.584914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.584927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.584943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.585341 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.592693 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.606065 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.624651 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.636943 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.650057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.664366 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.676511 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.688585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.688625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.688635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.688651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.688660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.692052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.713551 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.728144 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.744251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.780457 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41" exitCode=0 Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.780544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.791921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.791968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.791980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.791999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.792012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.801240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.819381 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.833744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.850710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.870196 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.885704 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.894206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.894252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.894262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.894281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.894293 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.896527 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.901678 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.915888 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.929888 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.942108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.955194 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.964566 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.975420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.992310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.998230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.998283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.998295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.998312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4858]: I1122 07:11:19.998348 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.101401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.101442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.101454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.101471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.101483 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.206548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.206581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.206589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.206603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.206612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.309227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.309266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.309275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.309290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.309299 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.411592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.411633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.411646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.411663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.411676 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.514006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.514278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.514296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.514311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.514352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.534951 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:20 crc kubenswrapper[4858]: E1122 07:11:20.535124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.617120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.617161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.617171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.617192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.617204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.720009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.720048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.720073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.720093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.720106 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.797283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.799602 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe" exitCode=0 Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.799639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.817987 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.822174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.822228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.822241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.822259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.822274 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.833979 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.843767 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.854362 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.868895 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.881405 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.891513 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.901529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.912581 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.925355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.925395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.925406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.925426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.925438 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.931052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bc
c3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.943178 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.957294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.968936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4858]: I1122 07:11:20.977383 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.028106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.028145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.028161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.028178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc 
kubenswrapper[4858]: I1122 07:11:21.028188 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.095918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.096043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.096093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.096211 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.096279 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.096259108 +0000 UTC m=+50.937682124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.096516 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.096577 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.096548978 +0000 UTC m=+50.937971984 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.096603 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.096593928 +0000 UTC m=+50.938017154 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.130450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.130486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.130497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.130512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.130523 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.197150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.197206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197414 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197416 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197466 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197485 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197545 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.197525005 +0000 UTC m=+51.038948011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197436 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197579 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.197637 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.197617958 +0000 UTC m=+51.039041034 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.233236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.233268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.233288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.233305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.233330 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.335931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.335974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.335984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.336001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.336012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.440397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.440432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.440443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.440459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.440468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.534799 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.534813 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.535009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:21 crc kubenswrapper[4858]: E1122 07:11:21.535077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.542860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.542934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.542948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.542985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.542999 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.646577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.646608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.646616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.646631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.646642 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.750270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.750313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.750348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.750366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.750378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.852641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.852685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.852697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.852713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.852725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.956082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.956182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.956196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.956218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4858]: I1122 07:11:21.956231 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.059147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.059188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.059203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.059218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.059229 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.161778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.161811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.161820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.161834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.161885 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.264921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.264974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.264984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.265006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.265018 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.367831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.367878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.367889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.367906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.367918 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.470589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.470631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.470640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.470657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.470672 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.535364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:22 crc kubenswrapper[4858]: E1122 07:11:22.535524 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.573219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.573261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.573272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.573288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.573298 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.676192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.676230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.676239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.676253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.676263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.784579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.784877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.784888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.784904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.784916 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.888059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.888110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.888121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.888143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.888155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.991773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.991817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.991829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.991849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4858]: I1122 07:11:22.991862 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.094588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.094635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.094646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.094665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.094677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.198154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.198355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.198445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.198513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.198580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.301372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.301414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.301423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.301442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.301454 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.404033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.404086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.404099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.404122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.404136 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.506678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.506744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.506758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.506782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.506795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.535635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.535635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.535817 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.536035 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.609479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.609509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.609520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.609536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.609546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.713483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.713513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.713521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.713535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.713545 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.748623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.748663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.748676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.748692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.748702 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.761375 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.765507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.765540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.765549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.765567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.765577 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.778836 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.782754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.782785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.782794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.782808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.782817 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.797383 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.802027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.802079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.802090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.802110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.802125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.816044 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.816654 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406" exitCode=0 Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.816727 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.819483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.819518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.819532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.819550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.819563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.836422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.838794 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.838811 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.839403 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: E1122 07:11:23.839513 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.840289 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.841494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.841557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.841571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.841597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.841610 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.857197 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.871618 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.882111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.886442 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.900106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.912282 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.928417 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.945127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.945179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.945190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.945208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.945220 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.948464 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.966194 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.981935 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4858]: I1122 07:11:23.997210 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.011462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.039069 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.049309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.049381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.049393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.049413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.049425 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.054518 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-
cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.067907 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.083101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.095279 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.113045 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.137930 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152170 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.152780 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.167836 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969
a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.182778 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.195385 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.212064 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.224975 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.239502 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.254695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.254744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.254755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.254772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.254782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.256152 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:
11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.271380 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.357079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.357124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.357133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.357149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.357158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.460054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.460106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.460118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.460137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.460149 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.535614 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:24 crc kubenswrapper[4858]: E1122 07:11:24.535771 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.562291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.562352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.562364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.562381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.562391 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.665959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.666034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.666135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.666168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.666183 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.769482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.769524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.769533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.769548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.769559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.841303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerStarted","Data":"5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.841859 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.855186 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.864515 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.868729 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.872668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.872720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.872732 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.872749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.872762 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.880912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.895733 4858 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d
0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.908536 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.919295 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.932153 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.948137 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.967893 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.975099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.975149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.975161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.975178 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.975190 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.982192 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4858]: I1122 07:11:24.995512 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.007102 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.018766 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.030039 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.043680 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.057215 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.068082 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.078898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.078956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.078971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.078993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc 
kubenswrapper[4858]: I1122 07:11:25.079011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.084633 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.090841 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.108051 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.124425 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.139030 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.157003 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.176609 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.182205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.182280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.182293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.182365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.182381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.192609 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.209950 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.222721 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.246655 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.263772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.277650 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.285474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.285512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.285522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc 
kubenswrapper[4858]: I1122 07:11:25.285537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.285546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.293096 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.306240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.319057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.335108 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524f
e514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.352439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.368285 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.382565 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.387961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.388024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.388037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.388059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.388070 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.396237 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.408527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.430309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.442576 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.456807 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.471167 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.490655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.490712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.490725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.490746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.490759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.535107 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.535107 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:25 crc kubenswrapper[4858]: E1122 07:11:25.535276 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:25 crc kubenswrapper[4858]: E1122 07:11:25.535374 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.593032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.593063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.593074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.593088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.593099 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.696036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.696067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.696076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.696091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.696100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.799138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.799204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.799218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.799245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.799263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.849676 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb" exitCode=0 Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.849783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.868769 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.884297 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.895962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.903119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.903156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.903167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.903183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.903196 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.910669 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.928193 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.954549 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.976275 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4858]: I1122 07:11:25.992138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.004770 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.010260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.010311 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.010363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.010382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.010392 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.020110 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.035218 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.051955 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753f
c478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.067709 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.081154 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.113461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.113507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.113516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.113532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.113543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.216852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.216890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.216901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.216918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.216933 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.319801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.319869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.319883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.319952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.319968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.423459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.423504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.423514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.423532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.423542 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.526628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.526711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.526724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.526743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.526774 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.534832 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:26 crc kubenswrapper[4858]: E1122 07:11:26.534964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.629054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.629095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.629104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.629120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.629129 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.732071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.732124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.732135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.732153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.732165 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.835711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.835811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.835831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.835853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.835868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.856578 4858 generic.go:334] "Generic (PLEG): container finished" podID="e7ea6513-de67-4c47-8329-7f922012c318" containerID="befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45" exitCode=0 Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.856661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerDied","Data":"befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.876109 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.890504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.903434 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.914414 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.927731 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.938827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.938877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.938895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.938917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.938931 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.941737 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.956251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.970130 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.980881 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.995462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.998075 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d"] Nov 22 07:11:26 crc kubenswrapper[4858]: I1122 07:11:26.998959 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.001391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.001743 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.012224 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.027855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.040251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.041931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.041959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.041967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.041982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.041994 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.055111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d
4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.068196 4858 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.084136 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.101503 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.114735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.129744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.144694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.144982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.145030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.145043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.145062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.145072 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.163272 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.163711 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7518346-69ca-444a-bcb3-26bdab4870a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.163754 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drbp\" (UniqueName: \"kubernetes.io/projected/d7518346-69ca-444a-bcb3-26bdab4870a0-kube-api-access-8drbp\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.163818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.163862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.181525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.198163 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.247955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.248037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.248053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.248080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.248094 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.258338 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.265139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.265209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.265241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7518346-69ca-444a-bcb3-26bdab4870a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.265269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drbp\" (UniqueName: \"kubernetes.io/projected/d7518346-69ca-444a-bcb3-26bdab4870a0-kube-api-access-8drbp\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.265916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.266548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7518346-69ca-444a-bcb3-26bdab4870a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.281212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7518346-69ca-444a-bcb3-26bdab4870a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.281433 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.288871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drbp\" (UniqueName: \"kubernetes.io/projected/d7518346-69ca-444a-bcb3-26bdab4870a0-kube-api-access-8drbp\") pod \"ovnkube-control-plane-749d76644c-bkm2d\" (UID: \"d7518346-69ca-444a-bcb3-26bdab4870a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.299858 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.315563 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.328746 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.337300 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.346225 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.350807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.350831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.350839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.350853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.350862 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: W1122 07:11:27.355256 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7518346_69ca_444a_bcb3_26bdab4870a0.slice/crio-daa55181f0f93b26c391db4e441bc7bbcce5e2725a1be37cd4361081697f6466 WatchSource:0}: Error finding container daa55181f0f93b26c391db4e441bc7bbcce5e2725a1be37cd4361081697f6466: Status 404 returned error can't find the container with id daa55181f0f93b26c391db4e441bc7bbcce5e2725a1be37cd4361081697f6466 Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.454379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.454418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.454431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.454447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.454461 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.534679 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:27 crc kubenswrapper[4858]: E1122 07:11:27.534795 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.535088 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:27 crc kubenswrapper[4858]: E1122 07:11:27.535142 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.556664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.556691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.556701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.556713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.556721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.662988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.663469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.663511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.663530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.663546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.765925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.765980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.765997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.766018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.766033 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.864912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" event={"ID":"d7518346-69ca-444a-bcb3-26bdab4870a0","Type":"ContainerStarted","Data":"daa55181f0f93b26c391db4e441bc7bbcce5e2725a1be37cd4361081697f6466"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.868408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.868447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.868462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.868480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.868496 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.870466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" event={"ID":"e7ea6513-de67-4c47-8329-7f922012c318","Type":"ContainerStarted","Data":"5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.885840 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.898089 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.908932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.918907 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.930392 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.943250 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.959006 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.970957 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.985655 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4858]: I1122 07:11:27.996993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.007979 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.018679 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.028978 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.042614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.063747 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.074236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.074282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.074297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.074313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.074347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.176495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.176528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.176537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.176551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.176560 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.279158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.279203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.279218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.279243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.279259 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.381774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.381861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.381884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.381908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.381925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.472548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-m2bfv"] Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.473029 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: E1122 07:11:28.473096 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.484390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.484455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.484473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.484502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.484519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.485130 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.497754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.513556 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.524905 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.534968 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:28 crc kubenswrapper[4858]: E1122 07:11:28.535140 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.536371 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.550977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-rout
er-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.578243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.579456 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.579484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzvt\" (UniqueName: \"kubernetes.io/projected/668a4495-5031-4084-9b05-d5d73dd20613-kube-api-access-gxzvt\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.593774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.593846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.593860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.593881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.593900 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.601578 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.623866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.636430 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.647074 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.659128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.672000 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.680811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.680880 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzvt\" (UniqueName: \"kubernetes.io/projected/668a4495-5031-4084-9b05-d5d73dd20613-kube-api-access-gxzvt\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: E1122 07:11:28.681027 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:28 crc kubenswrapper[4858]: E1122 07:11:28.681109 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:29.181089919 +0000 UTC m=+51.022512925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.691243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.701271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.701354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.701365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.701385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.701397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.702157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzvt\" (UniqueName: \"kubernetes.io/projected/668a4495-5031-4084-9b05-d5d73dd20613-kube-api-access-gxzvt\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.707239 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.720278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.803628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.803662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.803675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.803691 4858 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.803703 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.874447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" event={"ID":"d7518346-69ca-444a-bcb3-26bdab4870a0","Type":"ContainerStarted","Data":"5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.874492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" event={"ID":"d7518346-69ca-444a-bcb3-26bdab4870a0","Type":"ContainerStarted","Data":"58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.876516 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/0.log" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.879962 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883" exitCode=1 Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.880056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.881153 4858 scope.go:117] "RemoveContainer" containerID="a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.905458 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.907877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.907907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.907944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.907966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.907979 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.921838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.936410 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.952588 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.965897 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.978182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:28 crc kubenswrapper[4858]: I1122 07:11:28.990312 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.006111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc
752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.009885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.009917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.009926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.009943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.009956 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.020285 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e98
81b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.038710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.051716 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.065337 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.078153 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.097062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.107568 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.111950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.111982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.111995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.112012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.112024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.119950 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.132648 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.142542 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.157201 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.169240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.184571 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.189246 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.189368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.189390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.189420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.189818 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:11:45.189794352 +0000 UTC m=+67.031217368 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.189834 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.189910 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:45.189899045 +0000 UTC m=+67.031322051 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.189914 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.189958 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:30.189947207 +0000 UTC m=+52.031370213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.190023 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.190067 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:45.19005593 +0000 UTC m=+67.031478936 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.198160 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.209204 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.214510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.214591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.214601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.214618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.214629 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.220624 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.231343 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.258017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1
316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 
07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.269973 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.285235 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.290662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.290695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.290793 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.290807 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.290817 4858 projected.go:194] Error preparing data 
for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.290849 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:45.290838802 +0000 UTC m=+67.132261808 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.291067 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.291079 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.291086 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.291110 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:45.29110305 +0000 UTC m=+67.132526056 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.296865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.310010 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.316998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.317026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.317062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.317078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.317086 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.319275 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.331556 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.420192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.420219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.420228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.420241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.420249 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.522226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.522441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.522464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.522489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.522517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.535644 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.535896 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.535697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:29 crc kubenswrapper[4858]: E1122 07:11:29.536129 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.552936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.563919 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.581642 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.602967 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.616081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.625827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.625869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.625880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.625897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.625910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.638575 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.654022 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.673903 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.693965 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.707257 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.719990 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.729030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.729066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.729075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.729089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.729098 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.736218 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.759501 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 
07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.773002 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.788138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.803079 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.832109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.832150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.832160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.832175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.832186 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.885889 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/0.log" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.889949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.918991 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.933062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.934980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.935022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.935035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.935053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.935065 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.942959 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.955126 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.968250 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.979547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:29 crc kubenswrapper[4858]: I1122 07:11:29.993345 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:29Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.005638 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.017337 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.029344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.038304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.038630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.038718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.038810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.038890 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.047352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 
07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.065709 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.113243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.141150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.141222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.141237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.141258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.141271 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.151372 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.188219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.201292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:30 crc kubenswrapper[4858]: E1122 07:11:30.201511 4858 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:30 crc kubenswrapper[4858]: E1122 07:11:30.201575 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:32.201555945 +0000 UTC m=+54.042978951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.227109 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc 
kubenswrapper[4858]: I1122 07:11:30.244040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.244111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.244124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.244141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.244154 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.346247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.346283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.346292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.346306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.346337 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.449300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.449365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.449378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.449397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.449409 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.535302 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:30 crc kubenswrapper[4858]: E1122 07:11:30.535460 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.535514 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:30 crc kubenswrapper[4858]: E1122 07:11:30.535674 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.552005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.552034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.552044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.552057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.552066 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.654469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.654516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.654526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.654548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.654559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.757552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.757664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.757684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.757723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.757743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.861106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.861172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.861187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.861208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.861224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.896397 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/1.log" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.897415 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/0.log" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.900954 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a" exitCode=1 Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.901016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.901094 4858 scope.go:117] "RemoveContainer" containerID="a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.901986 4858 scope.go:117] "RemoveContainer" containerID="297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a" Nov 22 07:11:30 crc kubenswrapper[4858]: E1122 07:11:30.902259 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.921197 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.932351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.944817 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.953213 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.963188 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.963958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.963997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.964026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.964041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.964052 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.977586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\
":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4858]: I1122 07:11:30.988671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.005465 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.020955 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.036132 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.048223 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.060830 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.067784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.067836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.067846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.067867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.067880 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.081415 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 
07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful 
for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.094859 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.109975 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.123408 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.171240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.171344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.171360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.171382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.171396 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.273646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.273717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.273731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.273746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.273757 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.376204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.376243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.376256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.376278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.376289 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.478764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.478831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.478851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.478872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.478885 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.534794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:31 crc kubenswrapper[4858]: E1122 07:11:31.534919 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.534959 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:31 crc kubenswrapper[4858]: E1122 07:11:31.535103 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.581253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.581292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.581300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.581313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.581343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.683729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.683799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.683811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.683832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.683845 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.786421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.786477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.786492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.786516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.786530 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.888925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.888998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.889010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.889026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.889036 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.905611 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/1.log" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.991730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.991762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.991774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.991790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4858]: I1122 07:11:31.991799 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.005589 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.017187 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.021357 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.031143 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.039352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.048027 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.057747 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.068828 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.078361 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.089520 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.093525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.093561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.093572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.093588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.093599 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.104756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.120791 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fd
b798233cddb3c7dfcffc5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone 
local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a
2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.129916 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.142097 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.156402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.170643 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.185220 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.196009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.196049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.196060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.196076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.196087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.201453 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.222931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:32 crc kubenswrapper[4858]: E1122 07:11:32.223046 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:32 crc kubenswrapper[4858]: E1122 07:11:32.223098 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:36.2230825 +0000 UTC m=+58.064505506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.298496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.298534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.298543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.298556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.298564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.402083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.402124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.402137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.402155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.402167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.504974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.505040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.505050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.505064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.505074 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.534689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.534689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:32 crc kubenswrapper[4858]: E1122 07:11:32.534874 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:32 crc kubenswrapper[4858]: E1122 07:11:32.534802 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.607214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.607726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.607813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.607893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.607956 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.710579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.711009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.711080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.711154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.711215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.813563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.813602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.813613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.813628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.813637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.916195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.916237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.916248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.916259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4858]: I1122 07:11:32.916268 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.019641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.020106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.020219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.020561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.020827 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.123625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.123663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.123676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.123695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.123708 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.226147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.226181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.226192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.226206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.226218 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.328425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.328465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.328475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.328492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.328510 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.430616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.430650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.430660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.430678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.430689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.533180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.533481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.533582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.533669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.533740 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.535522 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.535523 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:33 crc kubenswrapper[4858]: E1122 07:11:33.535810 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:33 crc kubenswrapper[4858]: E1122 07:11:33.535653 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.636521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.636560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.636594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.636610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.636620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.739348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.739385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.739396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.739411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.739420 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.841866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.841926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.841935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.841950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.841962 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.944128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.944178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.944186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.944200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4858]: I1122 07:11:33.944209 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.047175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.047231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.047243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.047261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.047275 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.149788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.149850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.149861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.149878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.149888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.166408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.166455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.166465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.166483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.166499 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.179157 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:34Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.182716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.182745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
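The status patch itself is being rejected for a different reason than the CNI condition: the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2025-11-22, so the kubelet's x509 verification fails. The sketch below is a hypothetical diagnostic (not part of this journal) that fetches the webhook's serving certificate and prints its validity window; it assumes the webhook is still listening on that address, that the TLS handshake can complete without a client certificate, and that the third-party "cryptography" package is available for X.509 parsing.

```python
# Hypothetical diagnostic: inspect the serving certificate of the webhook the
# kubelet failed to call (endpoint taken from the error message above).
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party dependency, assumed installed

HOST, PORT = "127.0.0.1", 9743  # from the kubelet error above

def check_webhook_cert(host: str = HOST, port: int = PORT) -> None:
    # get_server_certificate() does not verify the chain, so an expired
    # certificate can still be retrieved for inspection.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    print(f"subject:   {cert.subject.rfc4514_string()}")
    print(f"not after: {not_after.isoformat()}")
    print("EXPIRED" if now > not_after else "still valid")

if __name__ == "__main__":
    check_webhook_cert()
```

If the printed "not after" date matches the 2025-08-24T17:21:41Z expiry in the error, the failure is on the webhook's side (stale serving certificate) rather than in the kubelet's patch payload.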
event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.182754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.182768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.182777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.194505 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:34Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.198139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.198162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.198170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.198202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.198211 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.209974 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:34Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.213974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.214019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.214035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.214056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.214074 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.227361 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:34Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.231133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.231170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
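The same "Error updating node status, will retry" record recurs within milliseconds (07:11:34.179, .194, .209, .227, ...) because the kubelet retries the status patch several times in quick succession before the next sync. A small, hypothetical sketch for quantifying that repetition from a saved journal export is below; the file name is an assumption (e.g. produced with `journalctl -u kubelet > kubelet.log`), not something taken from this log.

```python
# Hypothetical helper: tally the recurring failure signatures in an exported
# kubelet journal so the retry pattern above is easy to quantify.
from collections import Counter
from pathlib import Path

LOG_FILE = Path("kubelet.log")  # assumed export location

def summarize_status_errors(path: Path = LOG_FILE) -> Counter:
    counts: Counter = Counter()
    for line in path.read_text(errors="replace").splitlines():
        if "Error updating node status, will retry" in line:
            counts["status patch retries"] += 1
        if "certificate has expired or is not yet valid" in line:
            counts["expired webhook certificate"] += 1
        if "no CNI configuration file" in line:
            counts["missing CNI configuration"] += 1
    return counts

if __name__ == "__main__":
    for reason, n in summarize_status_errors().most_common():
        print(f"{n:6d}  {reason}")
```

Roughly equal counts for the patch retries and the expired-certificate message would confirm that every status update in this window is being blocked by the same webhook TLS failure.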
event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.231184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.231201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.231213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.244693 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:34Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.244861 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.252396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.252447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.252459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.252476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.252488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.354845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.354882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.354891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.354905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.354913 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.457552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.457587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.457596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.457610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.457620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.535064 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.535161 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.535217 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:34 crc kubenswrapper[4858]: E1122 07:11:34.535296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.560730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.560800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.560818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.560839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.560855 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.664456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.664512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.664526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.664547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.664567 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.768784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.768842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.768858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.768884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.768905 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.872806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.872895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.872906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.872925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.872955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.975103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.975141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.975150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.975165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4858]: I1122 07:11:34.975175 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.079423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.079460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.079473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.079489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.079501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.182089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.182158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.182176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.182198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.182214 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.284455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.284497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.284506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.284520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.284531 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.386773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.386810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.386820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.386834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.386843 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.489773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.489815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.489830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.489853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.489868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.535813 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.535902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:35 crc kubenswrapper[4858]: E1122 07:11:35.536030 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:35 crc kubenswrapper[4858]: E1122 07:11:35.536520 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.592624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.592670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.592683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.592710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.592727 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.695792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.695852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.695865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.695891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.695907 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.798060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.798098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.798107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.798123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.798133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.900521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.900585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.900600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.900616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4858]: I1122 07:11:35.900627 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.003496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.003552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.003564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.003589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.003609 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.106976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.107039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.107056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.107074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.107084 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.209956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.210035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.210050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.210072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.210089 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.266485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:36 crc kubenswrapper[4858]: E1122 07:11:36.266814 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:36 crc kubenswrapper[4858]: E1122 07:11:36.266976 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:44.266943224 +0000 UTC m=+66.108366390 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.312708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.312821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.312845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.312896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.312914 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.415035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.415081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.415092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.415110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.415120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.517701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.517766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.517779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.517807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.517822 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.535393 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.535415 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:36 crc kubenswrapper[4858]: E1122 07:11:36.535547 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:36 crc kubenswrapper[4858]: E1122 07:11:36.535650 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.620293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.620352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.620365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.620386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.620400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.723088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.723161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.723175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.723199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.723224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.826173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.826208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.826215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.826230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.826239 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.928878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.928920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.928954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.928972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4858]: I1122 07:11:36.928983 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.031778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.031886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.031898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.031910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.031919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.133953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.133990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.133999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.134014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.134022 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.236706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.236770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.236779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.236797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.236809 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.339783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.339832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.339843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.339860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.339871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.442028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.442062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.442071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.442085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.442094 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.535568 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.535658 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:37 crc kubenswrapper[4858]: E1122 07:11:37.535693 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:37 crc kubenswrapper[4858]: E1122 07:11:37.535780 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.545199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.545638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.545853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.546045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.546396 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.649288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.649589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.649676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.649771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.649833 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.752146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.752702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.752791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.752869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.752941 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.855764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.856030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.856119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.856203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.856287 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.958981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.959042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.959053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.959071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4858]: I1122 07:11:37.959082 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.061597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.061626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.061634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.061648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.061657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.163911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.163981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.164004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.164027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.164044 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.266801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.266855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.266871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.266894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.266906 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.369155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.369200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.369209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.369223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.369232 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.471377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.471436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.471451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.471472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.471488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.535231 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.535255 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:38 crc kubenswrapper[4858]: E1122 07:11:38.535408 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:38 crc kubenswrapper[4858]: E1122 07:11:38.535479 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.574140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.574229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.574241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.574437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.574467 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.677023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.677113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.677132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.677160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.677179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.779470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.779519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.779529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.779546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.779557 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.882190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.882269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.882282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.882300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.882368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.985551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.985692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.985712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.985740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4858]: I1122 07:11:38.985758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.088069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.088121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.088139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.088162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.088178 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.190878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.190919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.190930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.190945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.190961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.294091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.294124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.294132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.294145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.294154 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.396827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.396872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.396884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.396903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.396919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.498814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.499374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.499471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.499553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.499645 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.534904 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:39 crc kubenswrapper[4858]: E1122 07:11:39.535033 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.534904 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:39 crc kubenswrapper[4858]: E1122 07:11:39.535121 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.548300 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.561833 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.571489 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.584456 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.596606 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.602607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.602668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.602680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.602696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.602727 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.608055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.639310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.658182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.671464 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.683887 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.695100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.704941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.704987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.704996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.705010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.705020 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.708658 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.720694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.738920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a73c8e5f71f09e64c7ac34cbc0e48d2d609456a1316ce894d374f50542ca8883\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"47 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:28.387770 6047 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:28.387783 6047 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:11:28.387787 6047 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:28.387788 6047 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 07:11:28.387812 6047 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 07:11:28.387793 6047 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387836 6047 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:28.387896 6047 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:11:28.388412 6047 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:28.388456 6047 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:28.388475 6047 factory.go:656] Stopping watch factory\\\\nI1122 07:11:28.388488 6047 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:28.388511 6047 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.750041 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.762273 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.775977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.807001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.807069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.807081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.807095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.807105 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.909902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.909939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.909952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.909968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4858]: I1122 07:11:39.909980 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.011989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.012045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.012059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.012076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.012525 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.114643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.114699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.114711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.114727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.114736 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.217020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.217087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.217099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.217144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.217157 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.320034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.320084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.320093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.320108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.320117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.423004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.423049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.423062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.423078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.423092 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.525212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.525279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.525288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.525303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.525311 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.535462 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.535481 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:40 crc kubenswrapper[4858]: E1122 07:11:40.535578 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:40 crc kubenswrapper[4858]: E1122 07:11:40.535683 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.627657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.627703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.627712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.627726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.627737 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.729722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.729762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.729771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.729786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.729796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.833946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.834249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.834336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.834360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.834373 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.936258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.936336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.936348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.936373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4858]: I1122 07:11:40.936405 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.039128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.039160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.039169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.039183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.039191 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.142021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.142064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.142074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.142089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.142100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.244590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.244643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.244660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.244679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.244690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.348058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.348506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.348585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.348707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.348809 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.452205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.452241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.452250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.452268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.452279 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.535445 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.535552 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:41 crc kubenswrapper[4858]: E1122 07:11:41.535655 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:41 crc kubenswrapper[4858]: E1122 07:11:41.535733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.554308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.554370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.554381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.554399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.554410 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.660444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.660919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.661014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.661090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.661748 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.764736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.764770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.764779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.764791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.764801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.867098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.867150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.867161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.867180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.867192 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.969961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.970227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.970357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.970443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4858]: I1122 07:11:41.970590 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.073940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.074420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.074544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.074943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.075156 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.178910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.179437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.179553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.179662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.179753 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.283287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.283413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.283439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.283474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.283498 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.387016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.387071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.387081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.387099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.387113 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.490018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.490070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.490084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.490102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.490112 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.535399 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.535431 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:42 crc kubenswrapper[4858]: E1122 07:11:42.535608 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:42 crc kubenswrapper[4858]: E1122 07:11:42.535756 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.592969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.593015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.593025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.593039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.593047 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.695618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.695657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.695667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.695684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.695695 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.798252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.798304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.798331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.798348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.798358 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.900660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.900696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.900708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.900726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:42 crc kubenswrapper[4858]: I1122 07:11:42.900737 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:42Z","lastTransitionTime":"2025-11-22T07:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.003189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.003228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.003236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.003253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.003261 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.106613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.106699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.106711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.106736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.106752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.209067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.209134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.209154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.209178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.209191 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.311529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.311561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.311569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.311583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.311592 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.414341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.414385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.414396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.414410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.414418 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.517495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.517551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.517559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.517572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.517601 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.535069 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:43 crc kubenswrapper[4858]: E1122 07:11:43.535223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.535261 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:43 crc kubenswrapper[4858]: E1122 07:11:43.535376 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.621130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.621184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.621193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.621212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.621222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.724467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.724524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.724549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.724572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.724583 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.826867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.826915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.826926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.826945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.826955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.929774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.929811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.929820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.929836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:43 crc kubenswrapper[4858]: I1122 07:11:43.929847 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:43Z","lastTransitionTime":"2025-11-22T07:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.032992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.033374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.033489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.033696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.033917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.137305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.137621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.137724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.137824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.137910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.240101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.240134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.240145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.240160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.240170 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.342798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.342844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.342858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.342876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.342889 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.354225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.354507 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.354603 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:00.35458347 +0000 UTC m=+82.196006476 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.445208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.445494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.445624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.445733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.445822 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.464162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.464203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.464214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.464230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.464239 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.477946 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.481747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.481786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.481797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.481813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.481823 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.495902 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.500365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.500404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.500415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.500431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.500440 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.516367 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.520763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.520801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.520810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.520824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.520833 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.534412 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.534628 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.535069 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.534644 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.535212 4858 scope.go:117] "RemoveContainer" containerID="297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a" Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.535252 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.538866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.538887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.538897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.538911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.538921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.550504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.551038 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: E1122 07:11:44.551179 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.553146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.553178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.553187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.553203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.553213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.562684 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.575635 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.592644 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.604724 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.619745 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.631807 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.646591 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662276 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.662678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.674868 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.696884 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fd
b798233cddb3c7dfcffc5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.707632 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.718845 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.730423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.742447 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.754002 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.763473 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:44Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.764469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.764502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.764514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.764530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.764542 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.867456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.867502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.867515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.867534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.867546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.952417 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/1.log" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.969462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.969498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.969509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.969527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:44 crc kubenswrapper[4858]: I1122 07:11:44.969537 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:44Z","lastTransitionTime":"2025-11-22T07:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.072045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.072113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.072126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.072151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.072166 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.174789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.174825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.174837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.174855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.174867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.266625 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.266773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.266800 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:12:17.266780053 +0000 UTC m=+99.108203069 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.266827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.266896 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.266896 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.266945 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:17.266934668 +0000 UTC m=+99.108357674 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.266960 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:17.266955049 +0000 UTC m=+99.108378055 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.277017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.277050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.277061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.277078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.277090 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.327671 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.368015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.368292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.368244 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.368658 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.368785 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.368927 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:17.368912972 +0000 UTC m=+99.210335978 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.368459 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.369478 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.369558 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.369668 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:17.369656826 +0000 UTC m=+99.211079832 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.379910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.379945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.379957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.379973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.379984 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.482611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.482975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.483101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.483214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.483296 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.537589 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.537916 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.538149 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:45 crc kubenswrapper[4858]: E1122 07:11:45.538299 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.585374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.585407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.585416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.585430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.585439 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.687551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.687589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.687608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.687625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.687634 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.789833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.789924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.789949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.789966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.789980 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.892736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.892767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.892777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.892793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.892803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.968721 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/1.log" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.971695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c"} Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.972359 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.986438 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.996475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.996528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.996539 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.996557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4858]: I1122 07:11:45.996570 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.000049 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.020634 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae
21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful 
for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.034334 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.050077 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.062824 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.078924 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.093464 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.098395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.098433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.098445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.098462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.098475 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.102771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.114473 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.128461 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.140287 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.154186 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.167120 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.180690 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.193573 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.200341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.200369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.200381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.200396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.200406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.203032 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.302752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.302790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.302800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.302816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.302827 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.405527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.405559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.405569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.405581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.405590 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.508286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.508354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.508368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.508385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.508397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.534821 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.534856 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:46 crc kubenswrapper[4858]: E1122 07:11:46.534958 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:46 crc kubenswrapper[4858]: E1122 07:11:46.535112 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.611683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.611757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.611780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.611808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.611828 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.714766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.714809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.714817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.714830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.714839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.817568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.817626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.817635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.817650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.817659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.920092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.920451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.920549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.920639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.920779 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.976275 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/2.log" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.977493 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/1.log" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.980404 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" exitCode=1 Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.980489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c"} Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.980665 4858 scope.go:117] "RemoveContainer" containerID="297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.981107 4858 scope.go:117] "RemoveContainer" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" Nov 22 07:11:46 crc kubenswrapper[4858]: E1122 07:11:46.981287 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:11:46 crc kubenswrapper[4858]: I1122 07:11:46.999095 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.010365 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.023209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.023240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.023249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.023261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.023270 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.029508 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297c36ad5bf362ff0606c3ea889ddc2e9d6b28fdb798233cddb3c7dfcffc5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"-resolver-ttxk5\\\\nI1122 07:11:30.113927 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qkh9t after 0 failed attempt(s)\\\\nI1122 07:11:30.113950 6344 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-qkh9t\\\\nI1122 07:11:30.113945 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1122 07:11:30.113833 6344 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-zbjb2 after 0 failed attempt(s)\\\\nI1122 07:11:30.113980 6344 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113985 6344 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-zbjb2\\\\nI1122 07:11:30.113989 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1122 07:11:30.113746 6344 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:11:30.114002 6344 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:11:30.114016 6344 obj_retry.go:386] Retry successful 
for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.039961 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.053042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.064856 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.078420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.091756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.101638 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.114990 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.124966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.125010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.125022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.125039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.125051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.133764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.145446 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.160491 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.172182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.185085 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.196884 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.206819 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:47Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.227752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.227990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.228091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.228177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.228269 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.331228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.331271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.331279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.331293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.331302 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.433472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.433510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.433521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.433540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.433551 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.534645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.534745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:47 crc kubenswrapper[4858]: E1122 07:11:47.534864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:47 crc kubenswrapper[4858]: E1122 07:11:47.535065 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.536199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.536230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.536239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.536253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.536261 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.639248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.639301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.639315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.639347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.639359 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.741707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.741745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.741753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.741774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.741783 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.843896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.843935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.843944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.843960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.843970 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.946091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.946136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.946147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.946166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.946179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.985379 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/2.log" Nov 22 07:11:47 crc kubenswrapper[4858]: I1122 07:11:47.988726 4858 scope.go:117] "RemoveContainer" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" Nov 22 07:11:47 crc kubenswrapper[4858]: E1122 07:11:47.988870 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.008414 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.023710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.035421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.046135 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.048628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.048657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.048669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.048686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.048698 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.059715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.072035 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.082788 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.096083 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.110699 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.130262 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.140733 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.150681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.150743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.150757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.150784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.150801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.156588 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.170620 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.183305 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.195196 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.205005 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.219239 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.253200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.253239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.253249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.253263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.253272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.355929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.355992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.356003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.356019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.356032 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.458237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.458271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.458279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.458291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.458300 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.534967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.534988 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:48 crc kubenswrapper[4858]: E1122 07:11:48.535120 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:48 crc kubenswrapper[4858]: E1122 07:11:48.535397 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.543179 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.560376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.560490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.560499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.560515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.560524 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.662660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.662729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.662738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.662755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.662764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.765005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.765048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.765058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.765074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.765087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.867190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.867232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.867241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.867256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.867266 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.969154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.969195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.969203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.969216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4858]: I1122 07:11:48.969225 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.071790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.071855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.071863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.071878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.071887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.174172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.174206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.174214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.174227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.174235 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.276654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.276711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.276728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.276751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.276769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.379434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.379474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.379484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.379501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.379511 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.482177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.482208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.482215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.482230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.482241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.535005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.535070 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:49 crc kubenswrapper[4858]: E1122 07:11:49.535150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:49 crc kubenswrapper[4858]: E1122 07:11:49.536071 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.546399 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.567243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.581227 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.584974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.585045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.585056 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.585070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.585079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.592045 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.604206 4858 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d
0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.616222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.626031 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.636157 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.648676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc
752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.659007 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.674203 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.688266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.688334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.688348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.688390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.688409 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.689080 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.700786 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.717305 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.741530 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae
21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.751912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.762278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.776829 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:49Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.790387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.790460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.790470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.790486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.790495 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.893080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.893120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.893131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.893148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.893159 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.995258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.995297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.995307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.995345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4858]: I1122 07:11:49.995357 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.097945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.097982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.097993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.098008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.098017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.200350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.200402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.200410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.200426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.200435 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.302841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.302883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.302893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.302909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.302923 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.405037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.405068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.405080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.405094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.405104 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.507785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.507827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.507837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.507852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.507863 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.534789 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.534869 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:50 crc kubenswrapper[4858]: E1122 07:11:50.534914 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:50 crc kubenswrapper[4858]: E1122 07:11:50.535017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.609889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.609925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.609936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.609951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.609963 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.712369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.712424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.712437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.712456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.712467 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.814783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.814823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.814832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.814845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.814857 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.917892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.917943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.917955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.917972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4858]: I1122 07:11:50.917991 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.020505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.020549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.020561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.020580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.020593 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.122934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.122986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.122998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.123016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.123028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.225030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.225079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.225091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.225109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.225121 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.327276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.327336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.327348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.327364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.327375 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.433237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.433305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.433334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.433355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.433366 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.534769 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:51 crc kubenswrapper[4858]: E1122 07:11:51.534885 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.535177 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:51 crc kubenswrapper[4858]: E1122 07:11:51.535246 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.536449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.536471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.536479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.536490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.536499 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.638922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.638957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.638967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.638982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.638992 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.742006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.742045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.742056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.742095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.742111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.843768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.843812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.843823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.843840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.843851 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.947349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.947380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.947388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.947401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4858]: I1122 07:11:51.947410 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.050392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.050434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.050445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.050459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.050469 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.154521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.154561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.154572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.154591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.154604 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.258411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.258468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.258480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.258501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.258512 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.360808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.360852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.360864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.360881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.360891 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.463697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.463732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.463741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.463754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.463764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.534655 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.534703 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:52 crc kubenswrapper[4858]: E1122 07:11:52.534788 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:52 crc kubenswrapper[4858]: E1122 07:11:52.534939 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.565500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.565536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.565546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.565560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.565569 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.667488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.667530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.667596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.667615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.667626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.769727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.769757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.769767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.769779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.769790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.872220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.872287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.872299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.872380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.872390 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.974969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.975006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.975016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.975035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4858]: I1122 07:11:52.975047 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.077208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.077246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.077256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.077271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.077281 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.179793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.179827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.179834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.179847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.179856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.282511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.282566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.282581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.282601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.282620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.385431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.385470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.385480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.385495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.385507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.487517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.487559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.487568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.487588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.487597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.534999 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.535080 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:53 crc kubenswrapper[4858]: E1122 07:11:53.535183 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:53 crc kubenswrapper[4858]: E1122 07:11:53.535269 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.590884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.590929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.590942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.590958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.590970 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.693690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.693736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.693747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.693782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.693793 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.795890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.795923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.795932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.795945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.795975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.898755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.899063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.899076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.899093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4858]: I1122 07:11:53.899106 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.001569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.001610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.001620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.001638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.001647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.103537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.103580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.103594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.103615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.103630 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.206520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.206590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.206601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.206616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.206628 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.309158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.309197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.309210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.309228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.309240 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.411569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.411598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.411609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.411627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.411639 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.514429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.514461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.514470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.514485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.514495 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.535447 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.535598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.536023 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.536092 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.564017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.564052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.564062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.564079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.564090 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.578090 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:54Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.581439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.581463 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.581471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.581483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.581495 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.592468 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:54Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.596161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.596188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.596198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.596212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.596223 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.607999 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:54Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.611741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.611764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.611773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.611844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.611854 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.622837 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:54Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.626023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.626047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.626055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.626067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.626075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.636941 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:54Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:54 crc kubenswrapper[4858]: E1122 07:11:54.637049 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.638540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.638556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.638564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.638576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.638586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.740814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.740840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.740848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.740860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.740868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.842924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.842955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.842964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.842977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.842986 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.944721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.944752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.944762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.944776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4858]: I1122 07:11:54.944786 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.047310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.047367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.047378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.047394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.047406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.149556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.149588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.149599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.149614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.149624 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.252095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.252120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.252128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.252139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.252148 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.354616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.354651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.354659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.354676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.354685 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.457344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.457376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.457384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.457397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.457405 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.535541 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:55 crc kubenswrapper[4858]: E1122 07:11:55.535684 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.535933 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:55 crc kubenswrapper[4858]: E1122 07:11:55.535996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.560199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.560247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.560263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.560429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.560442 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.662862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.662891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.662917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.662931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.662939 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.765259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.765307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.765347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.765361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.765369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.867863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.867896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.867904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.867917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.867926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.970541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.970581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.970589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.970603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4858]: I1122 07:11:55.970614 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.072758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.072800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.072811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.072825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.072834 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.175386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.175671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.175888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.176066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.176256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.279309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.279928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.280000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.280075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.280144 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.382688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.382946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.383016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.383089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.383149 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.485631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.485974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.486055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.486124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.486185 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.534872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.534920 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:56 crc kubenswrapper[4858]: E1122 07:11:56.535023 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:56 crc kubenswrapper[4858]: E1122 07:11:56.535175 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.589125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.589168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.589182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.589202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.589226 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.691650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.691684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.691692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.691705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.691714 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.794793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.795220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.795392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.795524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.795641 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.899234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.899287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.899308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.899407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4858]: I1122 07:11:56.899419 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.001379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.001427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.001448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.001465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.001475 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.104991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.105265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.105399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.105534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.105636 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.208221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.208527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.208632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.208727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.208809 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.311142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.311699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.311777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.311887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.311953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.414050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.414084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.414095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.414110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.414120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.516967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.517003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.517014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.517029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.517043 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.534947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.534970 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:57 crc kubenswrapper[4858]: E1122 07:11:57.535080 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:57 crc kubenswrapper[4858]: E1122 07:11:57.535171 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.619170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.619212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.619220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.619235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.619245 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.721829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.722118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.722191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.722254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.722315 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.825013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.825055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.825065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.825083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.825102 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.927756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.927800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.927811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.927828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4858]: I1122 07:11:57.927840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.030658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.030703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.030714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.030728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.030739 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.132876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.132931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.132944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.132961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.132974 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.235570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.235618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.235631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.235646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.235657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.338062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.338105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.338123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.338173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.338192 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.440500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.440543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.440551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.440565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.440574 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.535027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.535102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:58 crc kubenswrapper[4858]: E1122 07:11:58.535190 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:11:58 crc kubenswrapper[4858]: E1122 07:11:58.535262 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.542854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.542893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.542902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.542915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.542925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.645490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.645537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.645549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.645586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.645600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.747724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.747763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.747772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.747786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.747795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.850342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.850382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.850391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.850403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.850412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.952478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.952553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.952591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.952609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4858]: I1122 07:11:58.952619 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.055450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.055502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.055517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.055537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.055550 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.157905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.157973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.157985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.158032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.158048 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.261249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.261293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.261305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.261340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.261353 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.363882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.363940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.363951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.363970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.363984 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.467036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.467082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.467094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.467121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.467133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.535254 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.535341 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:59 crc kubenswrapper[4858]: E1122 07:11:59.535456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:59 crc kubenswrapper[4858]: E1122 07:11:59.535531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.551135 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.566108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.570310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.570369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.570378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.570391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.570400 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.579454 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.592016 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.605400 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.619129 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.632095 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.643305 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.656839 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc
752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.671818 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.675274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.675344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.675357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.675375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.675388 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.685698 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.700910 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.717065 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.730661 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.744341 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.762177 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae
21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.772854 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.778140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.778395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.778484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.778567 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.778640 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.788055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.881928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.881962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.881981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.881999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.882009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.984695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.984734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.984744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.984758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4858]: I1122 07:11:59.984767 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.087057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.087092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.087111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.087128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.087140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.189604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.189645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.189655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.189674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.189687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.292418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.292456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.292471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.292488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.292500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.395089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.395125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.395136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.395151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.395162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.416578 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:00 crc kubenswrapper[4858]: E1122 07:12:00.416698 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:00 crc kubenswrapper[4858]: E1122 07:12:00.416748 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:12:32.416734688 +0000 UTC m=+114.258157694 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.497221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.497268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.497285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.497305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.497338 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.535008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.535060 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:00 crc kubenswrapper[4858]: E1122 07:12:00.535126 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:00 crc kubenswrapper[4858]: E1122 07:12:00.535201 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.599431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.599473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.599484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.599503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.599516 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.702183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.702225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.702238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.702254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.702265 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.804647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.804687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.804703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.804732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.804755 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.907956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.908031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.908046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.908067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4858]: I1122 07:12:00.908081 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.010570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.010603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.010610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.010623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.010632 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.112969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.113005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.113014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.113027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.113035 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.214994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.215035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.215046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.215061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.215073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.317486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.317525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.317540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.317555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.317566 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.420255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.420303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.420549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.420574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.420587 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.522655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.522687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.522696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.522708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.522716 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.534837 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:01 crc kubenswrapper[4858]: E1122 07:12:01.534964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.534837 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:01 crc kubenswrapper[4858]: E1122 07:12:01.535201 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.625237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.625285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.625298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.625331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.625346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.727842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.728480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.728508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.728525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.728534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.831168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.831209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.831217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.831232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.831241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.933408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.933454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.933467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.933487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4858]: I1122 07:12:01.933496 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.036000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.036044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.036052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.036065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.036076 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.138300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.138364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.138381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.138400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.138412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.241228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.241281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.241293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.241311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.241348 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.343542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.343594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.343606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.343623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.343636 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.448049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.448099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.448110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.448127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.448138 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.534906 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.534922 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:02 crc kubenswrapper[4858]: E1122 07:12:02.535312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:02 crc kubenswrapper[4858]: E1122 07:12:02.535405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.536918 4858 scope.go:117] "RemoveContainer" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" Nov 22 07:12:02 crc kubenswrapper[4858]: E1122 07:12:02.537219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.550812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.550865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.550879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.550895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.550908 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.653789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.653868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.653903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.653921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.653933 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.756684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.756731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.756742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.756760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.756775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.858831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.858880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.858889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.858904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.858916 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.963880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.963937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.963949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.963964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4858]: I1122 07:12:02.963975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.066512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.066553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.066565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.066580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.066589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.169299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.169350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.169364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.169379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.169387 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.275432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.275493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.275507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.275527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.275543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.377946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.378183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.378280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.378367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.378440 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.480360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.480393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.480401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.480414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.480424 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.536521 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.536674 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:03 crc kubenswrapper[4858]: E1122 07:12:03.536790 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:03 crc kubenswrapper[4858]: E1122 07:12:03.537157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.582556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.582617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.582633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.582659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.582674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.684697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.684727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.684735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.684747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.684756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.786943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.786986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.786994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.787007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.787019 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.889536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.889586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.889599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.889622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.889639 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.992256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.992290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.992300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.992332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4858]: I1122 07:12:03.992343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.095085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.095116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.095124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.095136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.095145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.196933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.196976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.196988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.197006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.197018 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.299375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.299418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.299430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.299448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.299460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.401589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.401638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.401649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.401667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.401678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.503602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.503637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.503646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.503661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.503670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.535138 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:04 crc kubenswrapper[4858]: E1122 07:12:04.535251 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.535138 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:04 crc kubenswrapper[4858]: E1122 07:12:04.535435 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.605762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.605810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.605819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.605832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.605840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.708083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.708135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.708150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.708166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.708181 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.810823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.811087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.811190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.811253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.811339 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.913920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.913959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.913969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.913984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4858]: I1122 07:12:04.913996 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.016928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.016975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.016986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.017001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.017010 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.035089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.035128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.035138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.035153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.035164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.049064 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.054255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.054303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.054328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.054350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.054365 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.068485 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.072496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.072651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.072715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.072839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.072923 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.085786 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.096077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.096372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.096487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.096592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.096677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.113290 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.119463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.119517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.119533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.119551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.119561 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.131426 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.131554 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.133368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.133606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.133619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.133643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.133655 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.236238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.236503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.236601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.236677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.236741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.339816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.340288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.340406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.340706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.340795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.443441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.443475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.443486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.443501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.443511 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.534937 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.535099 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.534937 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:05 crc kubenswrapper[4858]: E1122 07:12:05.535193 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.545984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.546028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.546041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.546058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.546070 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.649012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.649067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.649080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.649099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.649111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.752805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.752860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.752873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.752895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.752912 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.855859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.855896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.855921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.855939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.855949 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.958797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.958877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.958892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.958907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4858]: I1122 07:12:05.958917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.060885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.060921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.060932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.060947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.060957 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.163272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.163307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.163336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.163353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.163367 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.266595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.266635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.266645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.266659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.266668 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.370375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.370417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.370425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.370438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.370447 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.472726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.472776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.472787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.472802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.472813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.535287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.535379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:06 crc kubenswrapper[4858]: E1122 07:12:06.535434 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:06 crc kubenswrapper[4858]: E1122 07:12:06.535510 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.574913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.574958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.574969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.574984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.574993 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.677643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.677711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.677745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.677773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.677794 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.780590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.780642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.780655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.780674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.780688 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.882878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.882951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.882973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.882991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.883006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.985312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.985375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.985390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.985412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4858]: I1122 07:12:06.985428 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.046039 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/0.log" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.046113 4858 generic.go:334] "Generic (PLEG): container finished" podID="a6492476-649f-4291-81c3-e6f5a6398b70" containerID="63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce" exitCode=1 Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.046153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerDied","Data":"63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.046575 4858 scope.go:117] "RemoveContainer" containerID="63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.061428 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.073127 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.083991 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.087425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.087488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.087504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.087520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.087532 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.097252 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.108912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3
ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.125247 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d
9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.142680 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.157523 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.170948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.188763 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.190157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.190207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.190218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.190241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.190256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.207933 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.222608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.237525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.252965 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.272929 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.289247 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.293525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.293615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.293630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.293660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.293673 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.303306 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.315071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.396584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.396627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.396645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.396661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.396710 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.499118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.499171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.499180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.499194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.499205 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.535259 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.535269 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:07 crc kubenswrapper[4858]: E1122 07:12:07.535550 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:07 crc kubenswrapper[4858]: E1122 07:12:07.535656 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.601758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.601810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.601821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.601838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.601852 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.705773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.705824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.705835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.705855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.705867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.807658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.807691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.807699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.807713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.807723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.910550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.910591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.910606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.910622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4858]: I1122 07:12:07.910631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.012563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.012616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.012629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.012644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.012653 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.051708 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/0.log" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.051785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerStarted","Data":"ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.067366 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.084208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.095771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.115645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.115698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.115709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.115726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.115739 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.117997 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.138406 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.155235 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.186363 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.201818 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 
07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.218916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.218970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.218979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.218997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.219011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.219867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.234799 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.247694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.262832 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.277725 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.296703 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae
21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.310837 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.321667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.321711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.321721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.321743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.321760 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.325966 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.340191 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.352345 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.425563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.425637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.425651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.425673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.425685 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.528930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.528995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.529006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.529026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.529038 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.535262 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.535341 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:08 crc kubenswrapper[4858]: E1122 07:12:08.535463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:08 crc kubenswrapper[4858]: E1122 07:12:08.535626 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.632812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.632869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.632881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.632903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.632919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.737011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.737057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.737067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.737083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.737093 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.840156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.840211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.840225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.840245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.840258 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.943587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.943672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.943688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.943734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4858]: I1122 07:12:08.943749 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.047300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.047392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.047410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.047435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.047450 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.149967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.150027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.150038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.150058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.150069 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.252102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.252153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.252166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.252184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.252199 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.355536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.355589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.355600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.355616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.355631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.458879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.458937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.458950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.458971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.458983 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.535150 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:09 crc kubenswrapper[4858]: E1122 07:12:09.535343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.535411 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:09 crc kubenswrapper[4858]: E1122 07:12:09.535620 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.560140 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting 
failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.561512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.561575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.561585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.561603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.561612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.571830 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.588983 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.606471 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.628419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.643862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.658387 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.663804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.663851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.663862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.663880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.663890 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.676229 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.694086 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.711883 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.731304 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.745782 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.759083 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.766549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.766574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.766581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.766593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.766604 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.772656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.784925 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.795768 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.807800 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.823116 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.868258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.868284 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.868291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.868305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.868313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.970441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.970479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.970490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.970506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4858]: I1122 07:12:09.970519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.073344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.073382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.073393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.073407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.073420 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.176076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.176135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.176150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.176169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.176180 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.279013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.279077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.279088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.279104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.279116 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.381597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.381639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.381648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.381662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.381670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.485115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.485163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.485173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.485190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.485201 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.535583 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.535590 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:10 crc kubenswrapper[4858]: E1122 07:12:10.535785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:10 crc kubenswrapper[4858]: E1122 07:12:10.535861 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.587285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.587353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.587365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.587380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.587393 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.689562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.689646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.689670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.689698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.689724 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.792413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.792476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.792488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.792511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.792525 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.895673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.895727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.895739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.895762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.895778 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.999283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.999357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.999368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.999385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4858]: I1122 07:12:10.999397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.102290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.102361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.102373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.102392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.102404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.205599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.205658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.205672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.205691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.205705 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.309298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.309366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.309375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.309395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.309406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.412661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.412714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.412732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.412752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.412761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.515260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.515314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.515351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.515367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.515401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.534976 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:11 crc kubenswrapper[4858]: E1122 07:12:11.535089 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.535381 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:11 crc kubenswrapper[4858]: E1122 07:12:11.535447 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.547768 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.617980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.618053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.618063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.618078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.618088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.720524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.720575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.720587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.720605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.720937 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.824015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.824065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.824076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.824094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.824104 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.926599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.926670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.926682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.926697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4858]: I1122 07:12:11.926708 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.029199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.029244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.029255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.029273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.029286 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.132293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.132344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.132354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.132367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.132378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.234778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.234816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.234829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.234845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.234856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.337817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.337848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.337874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.337930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.337978 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.440103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.440136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.440144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.440157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.440166 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.535444 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.535460 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:12 crc kubenswrapper[4858]: E1122 07:12:12.535596 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:12 crc kubenswrapper[4858]: E1122 07:12:12.535719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.542025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.542075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.542086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.542105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.542115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.644520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.644550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.644560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.644573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.644581 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.746669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.746736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.746748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.746769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.746781 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.850150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.850211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.850220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.850251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.850263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.952600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.952634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.952643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.952655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4858]: I1122 07:12:12.952663 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.054871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.054908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.054918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.054933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.054944 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.157991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.158042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.158054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.158071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.158083 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.259911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.259946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.259956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.259972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.259982 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.362125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.362161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.362171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.362186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.362197 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.467445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.467582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.467595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.467611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.467623 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.535142 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.535418 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:13 crc kubenswrapper[4858]: E1122 07:12:13.535582 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:13 crc kubenswrapper[4858]: E1122 07:12:13.535712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.571067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.571113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.571122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.571178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.571191 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.673636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.673667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.673675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.673688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.673697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.776198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.776236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.776257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.776271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.776279 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.879565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.879641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.879654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.879669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.879681 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.981998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.982048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.982062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.982074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4858]: I1122 07:12:13.982084 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.084675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.084712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.084720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.084734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.084745 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.187574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.187634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.187652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.187678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.187697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.290272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.290345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.290359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.290378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.290392 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.392730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.392769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.392781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.392795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.392805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.495206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.495251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.495264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.495279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.495291 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.534996 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.535004 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:14 crc kubenswrapper[4858]: E1122 07:12:14.535138 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:14 crc kubenswrapper[4858]: E1122 07:12:14.535239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.597438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.597496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.597505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.597521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.597545 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.700125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.700395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.700462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.700544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.700620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.802879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.802915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.802924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.802937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.802946 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.905135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.905446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.905544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.905684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4858]: I1122 07:12:14.905771 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.007741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.007770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.007778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.007790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.007798 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.110036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.110072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.110080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.110096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.110105 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.212653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.212688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.212703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.212718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.212729 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.314596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.314623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.314631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.314645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.314653 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.349844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.349879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.349892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.349905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.349915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.362344 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 
2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.366742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.366817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.366857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.366883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.366898 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.379973 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 
2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.384033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.384288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.384497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.384629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.384724 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.397555 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 
2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.401212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.401431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.401508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.401583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.401657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.418552 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 
2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.425180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.425216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.425226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.425242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.425255 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.439518 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8142ece0-65e2-4a75-afd0-f871d9afb049\\\",\\\"systemUUID\\\":\\\"75279d0b-50e9-4469-9fd3-3a3571789513\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 
2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.439687 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.441516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.441641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.441745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.441844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.441926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.535116 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.535519 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.535844 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:15 crc kubenswrapper[4858]: E1122 07:12:15.535988 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.545122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.545386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.545516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.545744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.545840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.648558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.648586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.648594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.648607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.648616 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.750920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.750979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.750993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.751012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.751023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.854568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.854613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.854623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.854639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.854649 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.957574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.957877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.957959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.958025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4858]: I1122 07:12:15.958085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.060067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.060101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.060111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.060126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.060140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.162287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.162513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.162546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.162566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.162580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.265307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.265398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.265487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.265508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.265522 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.368632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.368685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.368704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.368722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.368734 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.471163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.471249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.471269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.471295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.471375 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.534725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.534791 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:16 crc kubenswrapper[4858]: E1122 07:12:16.534853 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:16 crc kubenswrapper[4858]: E1122 07:12:16.534945 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.574193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.574269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.574285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.574303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.574314 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.677032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.677083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.677095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.677111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.677122 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.779738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.779777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.779788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.779802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.779815 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.882621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.882670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.882685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.882705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.882721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.985539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.985641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.985652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.985673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:16 crc kubenswrapper[4858]: I1122 07:12:16.985690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:16Z","lastTransitionTime":"2025-11-22T07:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.088102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.088167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.088179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.088193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.088204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.190753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.190812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.190824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.190855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.190868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.290742 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.290862 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.290837624 +0000 UTC m=+163.132260630 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.290897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.290971 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.291051 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.291078 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.291104 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.291091972 +0000 UTC m=+163.132514978 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.291120 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.291112312 +0000 UTC m=+163.132535318 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.293162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.293217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.293228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.293249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.293265 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.392271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.392380 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392511 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392565 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392580 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392623 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392652 4858 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.392627813 +0000 UTC m=+163.234050969 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392659 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392680 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.392751 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.392728486 +0000 UTC m=+163.234151512 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.396210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.396249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.396264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.396282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.396292 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.499031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.499097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.499111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.499133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.499148 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.535238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.535399 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.535516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:17 crc kubenswrapper[4858]: E1122 07:12:17.536086 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.536097 4858 scope.go:117] "RemoveContainer" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.606478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.606518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.606528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.606575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.606589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.709012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.709531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.709545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.709566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.709586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.812466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.812517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.812525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.812539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.812574 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.914918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.914956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.914965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.914981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:17 crc kubenswrapper[4858]: I1122 07:12:17.914991 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:17Z","lastTransitionTime":"2025-11-22T07:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.017330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.017366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.017374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.017387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.017395 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.081066 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/2.log" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.083390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.083820 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.097624 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 
07:12:18.110081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.119377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.119422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.119433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.119449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.119460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.129716 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.142788 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.156735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.169361 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.181773 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 
07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.195124 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.207680 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.219151 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.221971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.222006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.222017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.222036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.222048 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.230817 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.243505 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.262752 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.273762 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.290844 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8974076d-b186-4881-9f73-68399a08b885\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21cf2f14944b35d986f700de555e6abac2f645c43ec6a12789f665d33f2a5a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530cdf70dba751b5fc1820866197efb4202df3dbf7a90fc8ff81fc943fe74f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26359fa9cd09d63015438732ecba7b4c5271f1103ee19fb63dfa857e03182b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0102408edc3d74c5e35bfd93b50aba129374e75
7f58bce310661f730e4b51750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd42a832eecf35db34ca68f9e8358a9cd1825d114ddfb090d8c80a9d4651e5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.302866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.315810 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.324635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.324678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.324687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.324706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.324719 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.326867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.338042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.426814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.426850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.426860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.426872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.426901 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.529727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.529766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.529774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.529789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.529801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.535190 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.535190 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:18 crc kubenswrapper[4858]: E1122 07:12:18.535309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:18 crc kubenswrapper[4858]: E1122 07:12:18.535384 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.632467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.632503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.632533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.632547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.632556 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.735749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.735827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.735842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.735874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.735892 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.838953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.838998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.839009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.839025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.839036 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.942718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.942773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.942786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.942813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:18 crc kubenswrapper[4858]: I1122 07:12:18.942829 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:18Z","lastTransitionTime":"2025-11-22T07:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.046295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.046379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.046391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.046414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.046428 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.093482 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/3.log" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.094342 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/2.log" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.096914 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" exitCode=1 Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.096979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.097033 4858 scope.go:117] "RemoveContainer" containerID="8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.097789 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:12:19 crc kubenswrapper[4858]: E1122 07:12:19.098035 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.108672 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.126997 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.138115 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.147871 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.148974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.149012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.149023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.149038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.149052 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.158616 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.170845 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d
4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.181267 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.192144 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.205732 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.220984 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.242303 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:18Z\\\",\\\"message\\\":\\\"bj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403615 6994 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403619 6994 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:12:18.403624 6994 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1122 07:12:18.403627 6994 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403634 6994 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-56l5j\\\\nI1122 07:12:18.403645 6994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-56l5j\\\\nF1122 07:12:18.403652 6994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.252809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.252945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.252981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.252993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.253065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.253085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.270370 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8974076d-b186-4881-9f73-68399a08b885\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21cf2f14944b35d986f700de555e6abac2f645c43ec6a12789f665d33f2a5a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530cdf70dba751b5fc1820866197efb4202df3dbf7a90fc8ff81fc943fe74f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26359fa9cd09d63015438732ecba7b4c5271f1103ee19fb63dfa857e03182b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0102408edc3d74c5e35bfd93b50aba129374e757f58bce310661f730e4b51750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd42a832eecf35db34ca68f9e8358a9cd1825d114ddfb090d8c80a9d4651e5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.282442 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.293555 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.302665 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.311850 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.321719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.332889 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.355419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.355446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.355454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.355467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.355476 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.457374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.457459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.457475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.457527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.457550 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.536618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.536695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:19 crc kubenswrapper[4858]: E1122 07:12:19.536799 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:19 crc kubenswrapper[4858]: E1122 07:12:19.536969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.550501 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.560103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.560138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.560148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.560164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.560175 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.565768 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.579797 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.593444 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.608754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.621290 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.639298 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.654699 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 
07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.661899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.661937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.661946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.662004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.662021 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.667970 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.678921 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.687520 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.699667 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.711430 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.730219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1
b3ce26cfc4c61441a82d6f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8042e1e73cf037f3f0abfcb50c147aa506b96bae21336b726c500bdd277cf52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:46Z\\\",\\\"message\\\":\\\"Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1122 07:11:46.070146 6578 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1122 07:11:46.070163 6578 obj_retry.go:409] Going to retry *v1.Pod resource setup for 12 objects: [openshift-multus/multus-56l5j openshift-kube-apiserver/kube-apiserver-crc openshift-multus/network-metrics-daemon-m2bfv openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-ncp4k openshift-machine-config-operator/machine-config-daemon-qkh9t openshift-network-diagnostics/network-check-target-xd92c openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-additional-cni-plugins-zbjb2 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d]\\\\nI1122 07:11:46.070189 6578 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nF1122 07:11:46.070197 6578 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:18Z\\\",\\\"message\\\":\\\"bj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403615 6994 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403619 6994 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:12:18.403624 6994 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1122 07:12:18.403627 6994 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403634 6994 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-56l5j\\\\nI1122 07:12:18.403645 6994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-56l5j\\\\nF1122 07:12:18.403652 6994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.742360 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.764875 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8974076d-b186-4881-9f73-68399a08b885\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21cf2f14944b35d986f700de555e6abac2f645c43ec6a12789f665d33f2a5a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530cdf70dba751b5fc1820866197efb4202df3dbf7a90fc8ff81fc943fe74f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26359fa9cd09d63015438732ecba7b4c5271f1103ee19fb63dfa857e03182b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0102408edc3d74c5e35bfd93b50aba129374e75
7f58bce310661f730e4b51750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd42a832eecf35db34ca68f9e8358a9cd1825d114ddfb090d8c80a9d4651e5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.765476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.765504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.765515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.765536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.765546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.779581 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.793628 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.804521 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.867620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.867643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.867651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.867662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.867671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.970715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.970768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.970780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.970798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:19 crc kubenswrapper[4858]: I1122 07:12:19.970810 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:19Z","lastTransitionTime":"2025-11-22T07:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.073515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.073553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.073564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.073578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.073588 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.103126 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/3.log" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.108480 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:12:20 crc kubenswrapper[4858]: E1122 07:12:20.108685 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.123779 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e63dfe1c05bbcdbd7c31c5286b18e3d6e9c31a535622568278762e0cf505e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.142615 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.158163 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://000f5762364ada7335352f340a624501e63d99e22b4e13a3fab7ac626e1ca8d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.172298 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.176885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.176940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.176958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.176981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.176997 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.190723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14e03227-73ca-4f1f-b3e0-28a197f72b42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:18Z\\\",\\\"message\\\":\\\"bj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403615 6994 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403619 6994 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1122 07:12:18.403624 6994 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1122 07:12:18.403627 6994 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1122 07:12:18.403634 6994 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-56l5j\\\\nI1122 07:12:18.403645 6994 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-56l5j\\\\nF1122 07:12:18.403652 6994 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk6nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ncp4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.201200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668a4495-5031-4084-9b05-d5d73dd20613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxzvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m2bfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.218676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8974076d-b186-4881-9f73-68399a08b885\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21cf2f14944b35d986f700de555e6abac2f645c43ec6a12789f665d33f2a5a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530cdf70dba751b5fc1820866197efb4202df3dbf7a90fc8ff81fc943fe74f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26359fa9cd09d63015438732ecba7b4c5271f1103ee19fb63dfa857e03182b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0102408edc3d74c5e35bfd93b50aba129374e75
7f58bce310661f730e4b51750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd42a832eecf35db34ca68f9e8358a9cd1825d114ddfb090d8c80a9d4651e5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a261a2952dd699a67247311aca2dcdb48621a1f27bfa539c77eab2f6e7ce78fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f641f8c06c9043badbc14142b2cd06517392639e775c9ca848ee8a084dbcfa0e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39957081030a33dbd05d57e3d81f8e4037795dd8602690d90ee40821b79a42ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.229720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-56l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6492476-649f-4291-81c3-e6f5a6398b70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:06Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001\\\\n2025-11-22T07:11:19+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_eac8b550-6ba8-49b0-bfdf-4edf4932d001 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:21Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:21Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:12:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s68f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-56l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.239089 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"381ac0be-1cf8-47a0-8263-6bd3f843b178\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ea4aedf7f0fe22358e46bdb9577560537b2f7dd166ac0113e9686ae6fdec6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1483c27f1609b179921c620968718942b6d4dc46e5787ecaf1842bcd3dc0cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f4d9cf3210aa21f108ffab9f96cc749e2842bd8c52f80bb3df4d04de147852\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c0e076f5c7792b8745ceec7bd1b57ed6d0b1d26cafefad645e56d88ac0fe45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.250637 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83ec48f9-2911-47ef-8772-1a40a9409057\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5275fb5e0c2590bdad23b4b5d9531b7708d7b9b4d4ee7c6e0cd97328e1b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cc09a8e372e9846e4ebdbae1736a7a4f0d440d6f074beb412d826bbd89d9ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b4cd4ae1a2be371467c4d30b7595bba8d75e14969a2a63721286c5470206a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa0e6d44cfd7146f58fced97423ee49a684a149438dbae49297b064f6a81005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f231f699faa2d2d373403c9d59512d79d6ba89f18219d8d7e775047bdd9d951\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:11:08Z\\\",\\\"message\\\":\\\"W1122 07:10:57.381828 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:10:57.382137 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763795457 cert, and key in /tmp/serving-cert-2545956460/serving-signer.crt, /tmp/serving-cert-2545956460/serving-signer.key\\\\nI1122 07:10:57.633043 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:10:57.640412 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:10:57.640668 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:10:57.647279 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2545956460/tls.crt::/tmp/serving-cert-2545956460/tls.key\\\\\\\"\\\\nF1122 07:11:08.116776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda9b4e6ca0d389e4c3a9cc0ef225c70cabc2b2d5339bd0c1d50e9c050de1f52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86b899ceb999d95b80f68ad2b2912e2d81a0bb51ffac2d95b0b5c11295462c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.262950 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f54c171b079f14dc509024f286d1c566115d04fb84f3bb322f9edead1dfcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53effccfdfdd8aae7d8d998686a9fb502559f1354b3d376b4a6781f91a90128e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.272515 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8b5rw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"095c751d-e5c2-4c33-9041-4bdcb32f1269\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59adf335b07e15a43afd71b11a4fccb44f7e393ef23eafe61911e4e6b52247c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86hzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8b5rw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.279354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.279436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.279453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.279491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.279523 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.281719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2564a9c2-6d7e-444f-ac11-230957b66e07\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60f37739028d014b068ca984515bc6391e9dc845b31762b4abe5bf9468dbeff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3c87beffae16dd8496bc925c05638397d19d14b0a83c3670759e948273e7a07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.293583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.305257 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ttxk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1043a8b7-9753-47c5-88da-a72e0a062eb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b629cf24a3e6c510ce8dd83b2c9b53b242cce0bd5b59aca8a5e126e26bd7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgd9m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ttxk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.315082 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879ed654f8e8668f44222afa6ba4911d8b22ee13126dc345ab1c5345fea91d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qkh9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.327958 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ea6513-de67-4c47-8329-7f922012c318\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5943f9d80624668f6e2f5855d6078add5d97c70a006bc4f1afc3b3c15875f645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98a326fba06aff357f6817ca9f68dcc9d4d9b73c39c4c56b34d2201a59e7ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://459524fe514574211d4fa8db832815c7bb2e5e21061d3af5b61905fb4e2a3d41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be849048093881e8b6fb376d3e7945e4a5feeaee9ea92ab1fbd389f06f584fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://026646c0f4edbd84d89dc404f9d628db6898cf3e4fc752cef3fa0339ca2b1406\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e82b95a88a265abe7fd3afc321c7f0ed00bc8510c34fb06b8802d08d8bf87bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://befc3d5a1a242ce47ef711ffe3148d3ef2337202ad5324cf296f61bc13d2af45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbjb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.338989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7518346-69ca-444a-bcb3-26bdab4870a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58a2dbb5b79c98ac39d26e75fc56f3847404c28537e52a6bdef48ecf2efda180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5199531cbb71b4010cdecee6a358939c9635e9881b3ae1623aaec698f6a3855f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8drbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bkm2d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 
07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.351902 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14ece65c-747a-4510-acf5-6c35d80ec1fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd83663574935703912154cbc5863f107c7114355eadeb921aa6fa4f6282aaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://249e9ee57a45c017163a85067f57e21000c881ae4d0f119374ab768e565293d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://963983ca1f1af86dc3a40cb52ab39738eedb1f0aab57560e4bc0fe5383cc4b80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0083ada00ab5c9dd2627175693b205538d0e078ca082fe254a4f2e77e87e0d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.382300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.382384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.382396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.382413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.382425 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.484564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.484900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.485017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.485115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.485204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.534612 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.534686 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:20 crc kubenswrapper[4858]: E1122 07:12:20.534742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:20 crc kubenswrapper[4858]: E1122 07:12:20.534792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.588134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.588528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.588662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.588862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.589017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.691828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.692242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.692451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.692633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.692871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.795337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.795368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.795378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.795393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.795402 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.898115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.898515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.899428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.899518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:20 crc kubenswrapper[4858]: I1122 07:12:20.899582 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:20Z","lastTransitionTime":"2025-11-22T07:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.002618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.003083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.003172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.003234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.003291 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.106200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.106509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.106601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.106732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.106815 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.210118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.210163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.210172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.210187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.210196 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.312953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.313033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.313046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.313062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.313073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.415763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.415813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.415829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.415846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.415856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.518627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.518669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.518677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.518691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.518700 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.535260 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:21 crc kubenswrapper[4858]: E1122 07:12:21.535390 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.535559 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:21 crc kubenswrapper[4858]: E1122 07:12:21.535703 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.621228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.621275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.621285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.621303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.621335 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.723836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.723880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.723891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.723908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.723920 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.826213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.826248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.826256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.826268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.826277 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.929335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.929382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.929392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.929409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:21 crc kubenswrapper[4858]: I1122 07:12:21.929422 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:21Z","lastTransitionTime":"2025-11-22T07:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.031616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.031661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.031671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.031685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.031696 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.133949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.133986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.133998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.134011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.134020 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.236627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.236687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.236698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.236724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.236736 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.339258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.339338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.339356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.339372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.339382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.442341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.442379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.442388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.442404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.442421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.535483 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.535544 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:22 crc kubenswrapper[4858]: E1122 07:12:22.535633 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:22 crc kubenswrapper[4858]: E1122 07:12:22.535780 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.545016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.545659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.545670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.545688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.545698 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.649000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.649036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.649047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.649062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.649073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.751155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.751200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.751213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.751232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.751244 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.854921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.854971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.854983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.855003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.855014 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.958120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.958376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.958385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.958399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4858]: I1122 07:12:22.958408 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.060394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.060429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.060438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.060453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.060462 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.162558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.162588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.162597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.162609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.162617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.265266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.265361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.265371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.265384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.265394 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.368211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.368267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.368287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.368343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.368363 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.471975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.472040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.472055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.472074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.472087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.534820 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:23 crc kubenswrapper[4858]: E1122 07:12:23.534961 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.535054 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:23 crc kubenswrapper[4858]: E1122 07:12:23.535210 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.575089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.575127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.575137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.575150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.575159 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.677125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.677164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.677175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.677190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.677200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.780645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.780728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.780739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.780757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.780773 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.884675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.884727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.884738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.884758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.884779 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.987549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.987590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.987606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.987621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:23 crc kubenswrapper[4858]: I1122 07:12:23.987631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:23Z","lastTransitionTime":"2025-11-22T07:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.090173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.090211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.090222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.090237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.090249 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.192579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.192627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.192636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.192653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.192665 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.295469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.295519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.295538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.295561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.295581 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.400382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.400422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.400434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.400452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.400463 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.503193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.503232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.503242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.503256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.503265 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.534778 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:24 crc kubenswrapper[4858]: E1122 07:12:24.534925 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.534785 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:24 crc kubenswrapper[4858]: E1122 07:12:24.535225 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.605192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.605224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.605235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.605252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.605263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.707682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.707725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.707737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.707752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.707761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.810143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.810184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.810197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.810214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.810226 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.912764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.912804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.912817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.912836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:24 crc kubenswrapper[4858]: I1122 07:12:24.912855 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:24Z","lastTransitionTime":"2025-11-22T07:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.014810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.014843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.014856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.014871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.014881 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.117697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.117751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.117762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.117776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.117784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.220836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.220891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.220907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.220945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.220968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.324064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.324143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.324172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.324198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.324212 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.428004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.428071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.428086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.428112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.428126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.452042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.452101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.452121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.452145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.452164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:25Z","lastTransitionTime":"2025-11-22T07:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.508516 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw"] Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.509091 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.510990 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.511021 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.512279 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.514411 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.535158 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.535252 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:25 crc kubenswrapper[4858]: E1122 07:12:25.535367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:25 crc kubenswrapper[4858]: E1122 07:12:25.535482 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.565506 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=14.565486733 podStartE2EDuration="14.565486733s" podCreationTimestamp="2025-11-22 07:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.544820778 +0000 UTC m=+107.386243784" watchObservedRunningTime="2025-11-22 07:12:25.565486733 +0000 UTC m=+107.406909739" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.573035 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.573093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.573172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.573202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.573277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.650261 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.650231041 podStartE2EDuration="53.650231041s" podCreationTimestamp="2025-11-22 07:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.650232281 +0000 UTC m=+107.491655287" watchObservedRunningTime="2025-11-22 07:12:25.650231041 +0000 UTC m=+107.491654047" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 
07:12:25.665891 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-56l5j" podStartSLOduration=71.665856487 podStartE2EDuration="1m11.665856487s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.665267869 +0000 UTC m=+107.506690865" watchObservedRunningTime="2025-11-22 07:12:25.665856487 +0000 UTC m=+107.507279503" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.674438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.674484 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.674541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.675409 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.675396649 podStartE2EDuration="37.675396649s" podCreationTimestamp="2025-11-22 07:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.674393507 +0000 UTC m=+107.515816533" watchObservedRunningTime="2025-11-22 07:12:25.675396649 +0000 UTC m=+107.516819655" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.675838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.675857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.675944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" 
(UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.675886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.676050 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.680808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.690998 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae44df37-e29e-4ab0-ab51-2a7174f2d90f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-d4mqw\" (UID: \"ae44df37-e29e-4ab0-ab51-2a7174f2d90f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.708409 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.708390726 podStartE2EDuration="1m12.708390726s" podCreationTimestamp="2025-11-22 07:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.693808284 +0000 UTC m=+107.535231290" watchObservedRunningTime="2025-11-22 07:12:25.708390726 +0000 UTC m=+107.549813732" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.718525 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8b5rw" podStartSLOduration=71.718506857 podStartE2EDuration="1m11.718506857s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.718355492 +0000 UTC m=+107.559778498" watchObservedRunningTime="2025-11-22 07:12:25.718506857 +0000 UTC m=+107.559929863" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.743288 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.743271273 podStartE2EDuration="1m12.743271273s" podCreationTimestamp="2025-11-22 07:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.742481087 +0000 UTC m=+107.583904103" watchObservedRunningTime="2025-11-22 07:12:25.743271273 +0000 UTC 
m=+107.584694279" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.767298 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ttxk5" podStartSLOduration=72.767277644 podStartE2EDuration="1m12.767277644s" podCreationTimestamp="2025-11-22 07:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.766287413 +0000 UTC m=+107.607710429" watchObservedRunningTime="2025-11-22 07:12:25.767277644 +0000 UTC m=+107.608700650" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.777649 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podStartSLOduration=71.777634103 podStartE2EDuration="1m11.777634103s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.776666052 +0000 UTC m=+107.618089078" watchObservedRunningTime="2025-11-22 07:12:25.777634103 +0000 UTC m=+107.619057109" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.804498 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zbjb2" podStartSLOduration=71.804481654 podStartE2EDuration="1m11.804481654s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.794216869 +0000 UTC m=+107.635639875" watchObservedRunningTime="2025-11-22 07:12:25.804481654 +0000 UTC m=+107.645904660" Nov 22 07:12:25 crc kubenswrapper[4858]: I1122 07:12:25.828719 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" Nov 22 07:12:25 crc kubenswrapper[4858]: W1122 07:12:25.840426 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae44df37_e29e_4ab0_ab51_2a7174f2d90f.slice/crio-9976edcb8f0cd7e055ebad74e34b7e10b9f4429285eef58f84ae69eabf6225f4 WatchSource:0}: Error finding container 9976edcb8f0cd7e055ebad74e34b7e10b9f4429285eef58f84ae69eabf6225f4: Status 404 returned error can't find the container with id 9976edcb8f0cd7e055ebad74e34b7e10b9f4429285eef58f84ae69eabf6225f4 Nov 22 07:12:26 crc kubenswrapper[4858]: I1122 07:12:26.122615 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" event={"ID":"ae44df37-e29e-4ab0-ab51-2a7174f2d90f","Type":"ContainerStarted","Data":"9976edcb8f0cd7e055ebad74e34b7e10b9f4429285eef58f84ae69eabf6225f4"} Nov 22 07:12:26 crc kubenswrapper[4858]: I1122 07:12:26.534810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:26 crc kubenswrapper[4858]: E1122 07:12:26.535067 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:26 crc kubenswrapper[4858]: I1122 07:12:26.535494 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:26 crc kubenswrapper[4858]: E1122 07:12:26.535589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:27 crc kubenswrapper[4858]: I1122 07:12:27.127223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" event={"ID":"ae44df37-e29e-4ab0-ab51-2a7174f2d90f","Type":"ContainerStarted","Data":"f7e9403c715666f62639e65d63fd210941848f92c3cb626931c16fb535fe9b95"} Nov 22 07:12:27 crc kubenswrapper[4858]: I1122 07:12:27.139926 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bkm2d" podStartSLOduration=73.139908092 podStartE2EDuration="1m13.139908092s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:25.804599888 +0000 UTC m=+107.646022914" watchObservedRunningTime="2025-11-22 07:12:27.139908092 +0000 UTC m=+108.981331108" Nov 22 07:12:27 crc kubenswrapper[4858]: I1122 07:12:27.535495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:27 crc kubenswrapper[4858]: I1122 07:12:27.535516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:27 crc kubenswrapper[4858]: E1122 07:12:27.535695 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:27 crc kubenswrapper[4858]: E1122 07:12:27.535848 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:28 crc kubenswrapper[4858]: I1122 07:12:28.534637 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:28 crc kubenswrapper[4858]: I1122 07:12:28.534745 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:28 crc kubenswrapper[4858]: E1122 07:12:28.534784 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:28 crc kubenswrapper[4858]: E1122 07:12:28.534834 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:29 crc kubenswrapper[4858]: I1122 07:12:29.534836 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:29 crc kubenswrapper[4858]: I1122 07:12:29.534836 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:29 crc kubenswrapper[4858]: E1122 07:12:29.535920 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:29 crc kubenswrapper[4858]: E1122 07:12:29.536099 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:30 crc kubenswrapper[4858]: I1122 07:12:30.535951 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:30 crc kubenswrapper[4858]: E1122 07:12:30.536168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:30 crc kubenswrapper[4858]: I1122 07:12:30.535995 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:30 crc kubenswrapper[4858]: E1122 07:12:30.536635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:31 crc kubenswrapper[4858]: I1122 07:12:31.535523 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:31 crc kubenswrapper[4858]: I1122 07:12:31.535566 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:31 crc kubenswrapper[4858]: E1122 07:12:31.535924 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:31 crc kubenswrapper[4858]: E1122 07:12:31.536017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:32 crc kubenswrapper[4858]: I1122 07:12:32.447280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:32 crc kubenswrapper[4858]: E1122 07:12:32.447603 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:32 crc kubenswrapper[4858]: E1122 07:12:32.447775 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs podName:668a4495-5031-4084-9b05-d5d73dd20613 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.447738829 +0000 UTC m=+178.289162015 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs") pod "network-metrics-daemon-m2bfv" (UID: "668a4495-5031-4084-9b05-d5d73dd20613") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:32 crc kubenswrapper[4858]: I1122 07:12:32.535391 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:32 crc kubenswrapper[4858]: I1122 07:12:32.535388 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:32 crc kubenswrapper[4858]: E1122 07:12:32.535500 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:32 crc kubenswrapper[4858]: E1122 07:12:32.535561 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:33 crc kubenswrapper[4858]: I1122 07:12:33.535737 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:33 crc kubenswrapper[4858]: I1122 07:12:33.535746 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:33 crc kubenswrapper[4858]: E1122 07:12:33.536043 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:33 crc kubenswrapper[4858]: E1122 07:12:33.536150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:33 crc kubenswrapper[4858]: I1122 07:12:33.536512 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:12:33 crc kubenswrapper[4858]: E1122 07:12:33.536694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:12:34 crc kubenswrapper[4858]: I1122 07:12:34.535228 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:34 crc kubenswrapper[4858]: I1122 07:12:34.535271 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:34 crc kubenswrapper[4858]: E1122 07:12:34.535682 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:34 crc kubenswrapper[4858]: E1122 07:12:34.535831 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:35 crc kubenswrapper[4858]: I1122 07:12:35.534893 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:35 crc kubenswrapper[4858]: I1122 07:12:35.535156 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:35 crc kubenswrapper[4858]: E1122 07:12:35.536053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:35 crc kubenswrapper[4858]: E1122 07:12:35.536407 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:36 crc kubenswrapper[4858]: I1122 07:12:36.535682 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:36 crc kubenswrapper[4858]: I1122 07:12:36.535682 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:36 crc kubenswrapper[4858]: E1122 07:12:36.536015 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:36 crc kubenswrapper[4858]: E1122 07:12:36.536094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:37 crc kubenswrapper[4858]: I1122 07:12:37.535410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:37 crc kubenswrapper[4858]: I1122 07:12:37.535481 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:37 crc kubenswrapper[4858]: E1122 07:12:37.535535 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:37 crc kubenswrapper[4858]: E1122 07:12:37.535608 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:38 crc kubenswrapper[4858]: I1122 07:12:38.535067 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:38 crc kubenswrapper[4858]: E1122 07:12:38.536094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:38 crc kubenswrapper[4858]: I1122 07:12:38.536379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:38 crc kubenswrapper[4858]: E1122 07:12:38.536469 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:39 crc kubenswrapper[4858]: E1122 07:12:39.467888 4858 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 22 07:12:39 crc kubenswrapper[4858]: I1122 07:12:39.535366 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:39 crc kubenswrapper[4858]: E1122 07:12:39.536680 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:39 crc kubenswrapper[4858]: I1122 07:12:39.536824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:39 crc kubenswrapper[4858]: E1122 07:12:39.536973 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:39 crc kubenswrapper[4858]: E1122 07:12:39.856501 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:40 crc kubenswrapper[4858]: I1122 07:12:40.535008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:40 crc kubenswrapper[4858]: I1122 07:12:40.535069 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:40 crc kubenswrapper[4858]: E1122 07:12:40.535270 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:40 crc kubenswrapper[4858]: E1122 07:12:40.535509 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:41 crc kubenswrapper[4858]: I1122 07:12:41.535592 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:41 crc kubenswrapper[4858]: I1122 07:12:41.535622 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:41 crc kubenswrapper[4858]: E1122 07:12:41.535739 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:41 crc kubenswrapper[4858]: E1122 07:12:41.535834 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:42 crc kubenswrapper[4858]: I1122 07:12:42.534606 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:42 crc kubenswrapper[4858]: I1122 07:12:42.534671 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:42 crc kubenswrapper[4858]: E1122 07:12:42.534733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:42 crc kubenswrapper[4858]: E1122 07:12:42.534832 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:43 crc kubenswrapper[4858]: I1122 07:12:43.535515 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:43 crc kubenswrapper[4858]: E1122 07:12:43.535672 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:43 crc kubenswrapper[4858]: I1122 07:12:43.535534 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:43 crc kubenswrapper[4858]: E1122 07:12:43.535932 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:44 crc kubenswrapper[4858]: I1122 07:12:44.534904 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:44 crc kubenswrapper[4858]: E1122 07:12:44.535054 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:44 crc kubenswrapper[4858]: I1122 07:12:44.534903 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:44 crc kubenswrapper[4858]: E1122 07:12:44.535167 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:44 crc kubenswrapper[4858]: E1122 07:12:44.857730 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:45 crc kubenswrapper[4858]: I1122 07:12:45.535573 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:45 crc kubenswrapper[4858]: I1122 07:12:45.535676 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:45 crc kubenswrapper[4858]: E1122 07:12:45.535736 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:45 crc kubenswrapper[4858]: E1122 07:12:45.535789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:46 crc kubenswrapper[4858]: I1122 07:12:46.535540 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:46 crc kubenswrapper[4858]: I1122 07:12:46.535694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:46 crc kubenswrapper[4858]: E1122 07:12:46.537924 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:46 crc kubenswrapper[4858]: E1122 07:12:46.538179 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:47 crc kubenswrapper[4858]: I1122 07:12:47.535546 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:47 crc kubenswrapper[4858]: E1122 07:12:47.535710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:47 crc kubenswrapper[4858]: I1122 07:12:47.535771 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:47 crc kubenswrapper[4858]: I1122 07:12:47.535813 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:12:47 crc kubenswrapper[4858]: E1122 07:12:47.535878 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:47 crc kubenswrapper[4858]: E1122 07:12:47.536068 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ncp4k_openshift-ovn-kubernetes(14e03227-73ca-4f1f-b3e0-28a197f72b42)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" Nov 22 07:12:48 crc kubenswrapper[4858]: I1122 07:12:48.534574 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:48 crc kubenswrapper[4858]: E1122 07:12:48.534698 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:48 crc kubenswrapper[4858]: I1122 07:12:48.534756 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:48 crc kubenswrapper[4858]: E1122 07:12:48.534803 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:49 crc kubenswrapper[4858]: I1122 07:12:49.535670 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:49 crc kubenswrapper[4858]: E1122 07:12:49.536708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:49 crc kubenswrapper[4858]: I1122 07:12:49.536724 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:49 crc kubenswrapper[4858]: E1122 07:12:49.536817 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:49 crc kubenswrapper[4858]: E1122 07:12:49.858578 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:50 crc kubenswrapper[4858]: I1122 07:12:50.534982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:50 crc kubenswrapper[4858]: I1122 07:12:50.535040 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:50 crc kubenswrapper[4858]: E1122 07:12:50.535113 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:50 crc kubenswrapper[4858]: E1122 07:12:50.535198 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:51 crc kubenswrapper[4858]: I1122 07:12:51.535021 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:51 crc kubenswrapper[4858]: E1122 07:12:51.535185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:51 crc kubenswrapper[4858]: I1122 07:12:51.535422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:51 crc kubenswrapper[4858]: E1122 07:12:51.535494 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:52 crc kubenswrapper[4858]: I1122 07:12:52.535236 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:52 crc kubenswrapper[4858]: E1122 07:12:52.535402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:52 crc kubenswrapper[4858]: I1122 07:12:52.535618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:52 crc kubenswrapper[4858]: E1122 07:12:52.535790 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.218433 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/1.log" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.219263 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/0.log" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.219397 4858 generic.go:334] "Generic (PLEG): container finished" podID="a6492476-649f-4291-81c3-e6f5a6398b70" containerID="ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b" exitCode=1 Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.219436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerDied","Data":"ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b"} Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.219475 4858 scope.go:117] "RemoveContainer" containerID="63c1854ebbd536ba957bbe1bbb5c43e3e3bbc4bbae5d2ea693bbb46f2a9220ce" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.219925 4858 scope.go:117] "RemoveContainer" containerID="ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b" Nov 22 07:12:53 crc kubenswrapper[4858]: E1122 07:12:53.220112 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-56l5j_openshift-multus(a6492476-649f-4291-81c3-e6f5a6398b70)\"" pod="openshift-multus/multus-56l5j" podUID="a6492476-649f-4291-81c3-e6f5a6398b70" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.237614 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-d4mqw" podStartSLOduration=99.237594565 podStartE2EDuration="1m39.237594565s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:27.139257281 +0000 UTC m=+108.980680297" watchObservedRunningTime="2025-11-22 
07:12:53.237594565 +0000 UTC m=+135.079017581" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.535783 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:53 crc kubenswrapper[4858]: I1122 07:12:53.535964 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:53 crc kubenswrapper[4858]: E1122 07:12:53.536114 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:53 crc kubenswrapper[4858]: E1122 07:12:53.536243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:54 crc kubenswrapper[4858]: I1122 07:12:54.224441 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/1.log" Nov 22 07:12:54 crc kubenswrapper[4858]: I1122 07:12:54.535727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:54 crc kubenswrapper[4858]: I1122 07:12:54.535757 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:54 crc kubenswrapper[4858]: E1122 07:12:54.536503 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:54 crc kubenswrapper[4858]: E1122 07:12:54.537185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:54 crc kubenswrapper[4858]: E1122 07:12:54.860117 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:55 crc kubenswrapper[4858]: I1122 07:12:55.535297 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:55 crc kubenswrapper[4858]: I1122 07:12:55.535379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:55 crc kubenswrapper[4858]: E1122 07:12:55.535470 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:55 crc kubenswrapper[4858]: E1122 07:12:55.535602 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:56 crc kubenswrapper[4858]: I1122 07:12:56.535298 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:56 crc kubenswrapper[4858]: I1122 07:12:56.535418 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:56 crc kubenswrapper[4858]: E1122 07:12:56.535508 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:56 crc kubenswrapper[4858]: E1122 07:12:56.535584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:57 crc kubenswrapper[4858]: I1122 07:12:57.535071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:57 crc kubenswrapper[4858]: E1122 07:12:57.535203 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:57 crc kubenswrapper[4858]: I1122 07:12:57.535077 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:57 crc kubenswrapper[4858]: E1122 07:12:57.535305 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:58 crc kubenswrapper[4858]: I1122 07:12:58.535239 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:12:58 crc kubenswrapper[4858]: I1122 07:12:58.535350 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:58 crc kubenswrapper[4858]: E1122 07:12:58.535433 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:12:58 crc kubenswrapper[4858]: E1122 07:12:58.535538 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:59 crc kubenswrapper[4858]: I1122 07:12:59.535050 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:59 crc kubenswrapper[4858]: I1122 07:12:59.535435 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:59 crc kubenswrapper[4858]: E1122 07:12:59.536047 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:59 crc kubenswrapper[4858]: E1122 07:12:59.536450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:59 crc kubenswrapper[4858]: E1122 07:12:59.860708 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:13:00 crc kubenswrapper[4858]: I1122 07:13:00.535565 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:00 crc kubenswrapper[4858]: I1122 07:13:00.535596 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:00 crc kubenswrapper[4858]: E1122 07:13:00.535729 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:00 crc kubenswrapper[4858]: E1122 07:13:00.536070 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:00 crc kubenswrapper[4858]: I1122 07:13:00.537080 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.247714 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/3.log" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.250637 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerStarted","Data":"00929b1df5a26e30a2c1037684fb453f88d28c10c6b8fded2bae6634a9c69e77"} Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.250959 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.276922 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podStartSLOduration=107.276902364 podStartE2EDuration="1m47.276902364s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:01.276608775 +0000 UTC m=+143.118031781" watchObservedRunningTime="2025-11-22 07:13:01.276902364 +0000 UTC m=+143.118325390" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.511335 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-m2bfv"] Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.511452 4858 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:01 crc kubenswrapper[4858]: E1122 07:13:01.511532 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.535491 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:01 crc kubenswrapper[4858]: I1122 07:13:01.535528 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:01 crc kubenswrapper[4858]: E1122 07:13:01.535647 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:01 crc kubenswrapper[4858]: E1122 07:13:01.535765 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:02 crc kubenswrapper[4858]: I1122 07:13:02.535371 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:02 crc kubenswrapper[4858]: E1122 07:13:02.535738 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:03 crc kubenswrapper[4858]: I1122 07:13:03.535101 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:03 crc kubenswrapper[4858]: E1122 07:13:03.535258 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:03 crc kubenswrapper[4858]: I1122 07:13:03.535362 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:03 crc kubenswrapper[4858]: I1122 07:13:03.535104 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:03 crc kubenswrapper[4858]: E1122 07:13:03.535502 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:03 crc kubenswrapper[4858]: E1122 07:13:03.535742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:04 crc kubenswrapper[4858]: I1122 07:13:04.534842 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:04 crc kubenswrapper[4858]: E1122 07:13:04.535104 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:04 crc kubenswrapper[4858]: I1122 07:13:04.535244 4858 scope.go:117] "RemoveContainer" containerID="ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b" Nov 22 07:13:04 crc kubenswrapper[4858]: E1122 07:13:04.862014 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:13:05 crc kubenswrapper[4858]: I1122 07:13:05.267795 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/1.log" Nov 22 07:13:05 crc kubenswrapper[4858]: I1122 07:13:05.267887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerStarted","Data":"19b7b8eef55f72f28c31aec38d6e3551fe3cdddeddb0d1c8f92ce3bec9d5c1d8"} Nov 22 07:13:05 crc kubenswrapper[4858]: I1122 07:13:05.534928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:05 crc kubenswrapper[4858]: I1122 07:13:05.534928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:05 crc kubenswrapper[4858]: I1122 07:13:05.534978 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:05 crc kubenswrapper[4858]: E1122 07:13:05.535372 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:05 crc kubenswrapper[4858]: E1122 07:13:05.535432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:05 crc kubenswrapper[4858]: E1122 07:13:05.535702 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:06 crc kubenswrapper[4858]: I1122 07:13:06.534975 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:06 crc kubenswrapper[4858]: E1122 07:13:06.535113 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:07 crc kubenswrapper[4858]: I1122 07:13:07.535730 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:07 crc kubenswrapper[4858]: I1122 07:13:07.535785 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:07 crc kubenswrapper[4858]: I1122 07:13:07.535884 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:07 crc kubenswrapper[4858]: E1122 07:13:07.535879 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:07 crc kubenswrapper[4858]: E1122 07:13:07.535969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:07 crc kubenswrapper[4858]: E1122 07:13:07.536033 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:08 crc kubenswrapper[4858]: I1122 07:13:08.534765 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:08 crc kubenswrapper[4858]: E1122 07:13:08.534891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:09 crc kubenswrapper[4858]: I1122 07:13:09.534632 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:09 crc kubenswrapper[4858]: I1122 07:13:09.534745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:09 crc kubenswrapper[4858]: I1122 07:13:09.535940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:09 crc kubenswrapper[4858]: E1122 07:13:09.536048 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m2bfv" podUID="668a4495-5031-4084-9b05-d5d73dd20613" Nov 22 07:13:09 crc kubenswrapper[4858]: E1122 07:13:09.535926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:09 crc kubenswrapper[4858]: E1122 07:13:09.536150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:10 crc kubenswrapper[4858]: I1122 07:13:10.535025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:10 crc kubenswrapper[4858]: I1122 07:13:10.537812 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 22 07:13:10 crc kubenswrapper[4858]: I1122 07:13:10.539829 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.535539 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.535611 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.535820 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.543743 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.543966 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.544399 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 22 07:13:11 crc kubenswrapper[4858]: I1122 07:13:11.544589 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 22 07:13:15 crc kubenswrapper[4858]: I1122 07:13:15.312712 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:13:15 crc kubenswrapper[4858]: I1122 07:13:15.312795 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:13:15 crc kubenswrapper[4858]: I1122 07:13:15.344427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.791690 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.824383 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hs8qj"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.824979 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.825795 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rsm26"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.826309 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.826382 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.826953 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.829404 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8grc"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.829920 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.834882 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.835234 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.838011 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5bh77"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839408 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839420 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839470 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839604 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839632 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839776 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.839952 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840025 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840043 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840076 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840147 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840159 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840213 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840251 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840513 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840499 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840569 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.840979 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841374 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841603 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841649 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841728 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841757 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841845 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841863 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.841954 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.842050 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.843318 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.843826 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.844140 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.844462 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.844613 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.844953 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.845924 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.846280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.847190 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.847811 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.850950 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bgn27"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.851621 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2scn9"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.851918 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.852289 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.852469 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.852632 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.852684 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.853027 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.853571 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.855385 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.855663 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.855807 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.856624 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.863456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.863751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.864023 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.864643 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.866453 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.868505 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.868725 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.871339 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.892248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.893330 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.893444 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894272 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894343 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894480 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: 
I1122 07:13:16.894593 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894688 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894786 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.894897 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895037 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895273 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895404 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895513 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895631 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.895741 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.896004 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.896122 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.896226 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.896511 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.896983 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897041 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897254 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897369 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hs8qj"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 
07:13:16.897408 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897534 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897620 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897646 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897770 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897847 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897860 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897880 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897940 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.897974 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898020 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898095 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898119 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898169 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898199 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898281 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898313 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.898381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.899303 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.900097 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.900307 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.900543 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.900666 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.901869 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.901935 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kgzjd"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.902535 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.903149 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.905932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.906487 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.911115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.911686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.912049 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.914982 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.915769 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.916687 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.916805 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.918612 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.919896 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6bl5t"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.919987 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.920667 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxvs9\" (UniqueName: \"kubernetes.io/projected/d6c7906f-ca7f-4b22-ab70-b38aad08121f-kube-api-access-mxvs9\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-serving-cert\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921566 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-client\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921600 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-config\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-trusted-ca\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bbbc0a-48ea-4633-b49b-3869f873c64f-serving-cert\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4322806d-7a81-49aa-9e44-638c6cab8e57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfxr\" (UniqueName: \"kubernetes.io/projected/55c7556d-4740-4be7-bc47-f81c4c7374c6-kube-api-access-xhfxr\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921763 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d6c7906f-ca7f-4b22-ab70-b38aad08121f-machine-approver-tls\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/593b796b-f4d8-4c80-b84f-38f74cfbd37b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-service-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36ffc53-b39c-4fff-b40e-0e618701060a-serving-cert\") pod 
\"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8sdc\" (UniqueName: \"kubernetes.io/projected/005c94b6-beb8-49e1-93e2-119bc01cd795-kube-api-access-q8sdc\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c7556d-4740-4be7-bc47-f81c4c7374c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tppws\" (UniqueName: \"kubernetes.io/projected/82c86211-6b1e-41e0-80b6-898aec0123a3-kube-api-access-tppws\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4322806d-7a81-49aa-9e44-638c6cab8e57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.921974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593b796b-f4d8-4c80-b84f-38f74cfbd37b-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922027 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert\") pod \"controller-manager-879f6c89f-5ktsq\" 
(UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-config\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922060 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzjx\" (UniqueName: \"kubernetes.io/projected/c36ffc53-b39c-4fff-b40e-0e618701060a-kube-api-access-8wzjx\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-node-pullsecrets\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922099 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-auth-proxy-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62gq\" (UniqueName: \"kubernetes.io/projected/55bbbc0a-48ea-4633-b49b-3869f873c64f-kube-api-access-m62gq\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148e8ff-7b01-4625-b5db-76eec5c1469e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: 
\"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-client\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.922259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.939139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c84d713-f4f9-4968-a086-95187d89c9c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.939237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.939287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.939351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941011 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941099 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-audit\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-encryption-config\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-dir\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 
07:13:16.941231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-encryption-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941491 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvbd\" (UniqueName: \"kubernetes.io/projected/e79f7ebf-0dac-4f86-b3f1-045904313fba-kube-api-access-8vvbd\") pod \"downloads-7954f5f757-bgn27\" (UID: \"e79f7ebf-0dac-4f86-b3f1-045904313fba\") " pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941678 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-serving-cert\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn462\" (UniqueName: \"kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-serving-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgr5\" (UniqueName: \"kubernetes.io/projected/4322806d-7a81-49aa-9e44-638c6cab8e57-kube-api-access-ltgr5\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-images\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-config\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.941979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5jn\" (UniqueName: \"kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942020 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-audit-dir\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4bv2\" (UniqueName: \"kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a148e8ff-7b01-4625-b5db-76eec5c1469e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjmd\" (UniqueName: \"kubernetes.io/projected/8c84d713-f4f9-4968-a086-95187d89c9c1-kube-api-access-7sjmd\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fchmc\" (UniqueName: \"kubernetes.io/projected/593b796b-f4d8-4c80-b84f-38f74cfbd37b-kube-api-access-fchmc\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: 
\"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgc87\" (UniqueName: \"kubernetes.io/projected/a148e8ff-7b01-4625-b5db-76eec5c1469e-kube-api-access-sgc87\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.943233 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.942331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.943642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxg77\" (UniqueName: \"kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.943682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-image-import-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-policies\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950201 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.950831 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.952088 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-znzs2"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.952143 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.952497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.953114 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.954092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.954296 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.954758 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.955120 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.955257 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.969938 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.974588 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.975054 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xllks"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.975498 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.975642 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.976546 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.977066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.979171 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t589f"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.979662 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.979759 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.981433 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.982194 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.990378 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.991716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rsm26"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.993540 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.993751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.994045 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.994203 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.995167 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.998517 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s"] Nov 22 07:13:16 crc kubenswrapper[4858]: I1122 07:13:16.999282 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.000014 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.000663 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.001431 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.001784 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.004306 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwq72"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.005121 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.006376 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.006786 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-55rnx"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.007625 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.007809 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.008179 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.008831 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.011275 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8grc"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.012382 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.012769 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.015449 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.016266 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.016395 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.016997 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.019236 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.019254 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.021498 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.022587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bgn27"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.023649 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.024813 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.025914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2scn9"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.026938 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.028302 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.038229 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-znzs2"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.040864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.042945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.044997 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.047910 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.050517 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-config\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-trusted-ca\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfxr\" (UniqueName: \"kubernetes.io/projected/55c7556d-4740-4be7-bc47-f81c4c7374c6-kube-api-access-xhfxr\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051747 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bbbc0a-48ea-4633-b49b-3869f873c64f-serving-cert\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4322806d-7a81-49aa-9e44-638c6cab8e57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d6c7906f-ca7f-4b22-ab70-b38aad08121f-machine-approver-tls\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-config\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4322806d-7a81-49aa-9e44-638c6cab8e57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.051832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/593b796b-f4d8-4c80-b84f-38f74cfbd37b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36ffc53-b39c-4fff-b40e-0e618701060a-serving-cert\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8sdc\" (UniqueName: 
\"kubernetes.io/projected/005c94b6-beb8-49e1-93e2-119bc01cd795-kube-api-access-q8sdc\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c7556d-4740-4be7-bc47-f81c4c7374c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.053756 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tppws\" (UniqueName: \"kubernetes.io/projected/82c86211-6b1e-41e0-80b6-898aec0123a3-kube-api-access-tppws\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.054516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/593b796b-f4d8-4c80-b84f-38f74cfbd37b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.055650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.056685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.057601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-service-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.056727 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-service-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.057893 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kgzjd"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.057979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.058019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4322806d-7a81-49aa-9e44-638c6cab8e57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.058160 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.058225 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-trusted-ca\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.058282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593b796b-f4d8-4c80-b84f-38f74cfbd37b-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.062959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.063005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-config\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.063028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-node-pullsecrets\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.063050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzjx\" (UniqueName: \"kubernetes.io/projected/c36ffc53-b39c-4fff-b40e-0e618701060a-kube-api-access-8wzjx\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " 
pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.063070 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.063092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-auth-proxy-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.064140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.064706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-config\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.066065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-node-pullsecrets\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.067492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.067726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.067845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.067897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.067985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d6c7906f-ca7f-4b22-ab70-b38aad08121f-machine-approver-tls\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.072159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4322806d-7a81-49aa-9e44-638c6cab8e57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.072361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.072590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/593b796b-f4d8-4c80-b84f-38f74cfbd37b-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.073111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.073220 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c7906f-ca7f-4b22-ab70-b38aad08121f-auth-proxy-config\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.073284 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5bh77"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.073518 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.073651 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074044 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36ffc53-b39c-4fff-b40e-0e618701060a-serving-cert\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bbbc0a-48ea-4633-b49b-3869f873c64f-serving-cert\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62gq\" (UniqueName: \"kubernetes.io/projected/55bbbc0a-48ea-4633-b49b-3869f873c64f-kube-api-access-m62gq\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148e8ff-7b01-4625-b5db-76eec5c1469e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-client\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 
07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c84d713-f4f9-4968-a086-95187d89c9c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-dir\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-audit\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-encryption-config\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074673 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-encryption-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.074972 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvbd\" (UniqueName: \"kubernetes.io/projected/e79f7ebf-0dac-4f86-b3f1-045904313fba-kube-api-access-8vvbd\") pod \"downloads-7954f5f757-bgn27\" (UID: \"e79f7ebf-0dac-4f86-b3f1-045904313fba\") " pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-serving-cert\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgr5\" (UniqueName: \"kubernetes.io/projected/4322806d-7a81-49aa-9e44-638c6cab8e57-kube-api-access-ltgr5\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: 
\"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075383 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn462\" (UniqueName: \"kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075403 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-serving-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-images\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs5jn\" (UniqueName: \"kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-audit-dir\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4bv2\" (UniqueName: \"kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075554 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-config\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075575 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7sjmd\" (UniqueName: \"kubernetes.io/projected/8c84d713-f4f9-4968-a086-95187d89c9c1-kube-api-access-7sjmd\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fchmc\" (UniqueName: \"kubernetes.io/projected/593b796b-f4d8-4c80-b84f-38f74cfbd37b-kube-api-access-fchmc\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a148e8ff-7b01-4625-b5db-76eec5c1469e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxg77\" (UniqueName: \"kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-image-import-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgc87\" (UniqueName: \"kubernetes.io/projected/a148e8ff-7b01-4625-b5db-76eec5c1469e-kube-api-access-sgc87\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075791 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-policies\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075847 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.075990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxvs9\" (UniqueName: \"kubernetes.io/projected/d6c7906f-ca7f-4b22-ab70-b38aad08121f-kube-api-access-mxvs9\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-serving-cert\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gb45\" (UniqueName: \"kubernetes.io/projected/8d6bdadf-9903-44ef-b008-4f3864e83bb4-kube-api-access-9gb45\") pod \"migrator-59844c95c7-k74tq\" (UID: \"8d6bdadf-9903-44ef-b008-4f3864e83bb4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-client\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076377 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148e8ff-7b01-4625-b5db-76eec5c1469e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-dir\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.076912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.077287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.077369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.077886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.077906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.078161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.078386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.078827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-audit\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.079210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.079286 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/005c94b6-beb8-49e1-93e2-119bc01cd795-audit-dir\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.080132 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c36ffc53-b39c-4fff-b40e-0e618701060a-config\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.080771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c7556d-4740-4be7-bc47-f81c4c7374c6-images\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.080958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.081144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-serving-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.081870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.081919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55bbbc0a-48ea-4633-b49b-3869f873c64f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.081957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.082269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.082674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-audit-policies\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 
07:13:17.082735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.082803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/55c7556d-4740-4be7-bc47-f81c4c7374c6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.082974 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.083190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-etcd-client\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.083715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-serving-cert\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.083766 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.084323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.084468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/005c94b6-beb8-49e1-93e2-119bc01cd795-image-import-ca\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.084610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.085067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a148e8ff-7b01-4625-b5db-76eec5c1469e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.086032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.086088 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s7s7n"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.086859 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.087090 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.087567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-encryption-config\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.088189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.088302 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xsjfg"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.089195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-client\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.089535 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.089667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82c86211-6b1e-41e0-80b6-898aec0123a3-serving-cert\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.090040 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.091530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c84d713-f4f9-4968-a086-95187d89c9c1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.091573 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.092936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.094480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.096007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.096174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.098877 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-55rnx"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.102680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t589f"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.104536 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.106046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6bl5t"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.107975 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.109187 4858 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.110758 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.112492 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwq72"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.114047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.116707 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s7s7n"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.119239 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xsjfg"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.120869 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.125298 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gx8tl"] Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.126697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.126955 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.150041 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.167171 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.177612 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gb45\" (UniqueName: \"kubernetes.io/projected/8d6bdadf-9903-44ef-b008-4f3864e83bb4-kube-api-access-9gb45\") pod \"migrator-59844c95c7-k74tq\" (UID: \"8d6bdadf-9903-44ef-b008-4f3864e83bb4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.187546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.206998 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.228408 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.235392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/005c94b6-beb8-49e1-93e2-119bc01cd795-encryption-config\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.238146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/82c86211-6b1e-41e0-80b6-898aec0123a3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.247543 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.269066 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.288237 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.308068 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.336031 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.351627 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.387870 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.407275 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.427988 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.447825 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.467452 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.495467 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.506895 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.527564 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.547747 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.566947 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 22 07:13:17 crc kubenswrapper[4858]: 
I1122 07:13:17.588007 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.607565 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.627193 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.646878 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.667109 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.687892 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.707853 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.727465 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.748610 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.767942 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.787638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.807432 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.827817 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.847424 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.868661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.886688 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.907776 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.926530 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.947410 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:13:17 
crc kubenswrapper[4858]: I1122 07:13:17.967934 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.985767 4858 request.go:700] Waited for 1.003172091s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcollect-profiles-dockercfg-kzf4t&limit=500&resourceVersion=0 Nov 22 07:13:17 crc kubenswrapper[4858]: I1122 07:13:17.987403 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.008193 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.027257 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.068060 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.087981 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.107558 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.127548 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.148417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.168040 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.192636 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.208145 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.227424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.248435 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.267475 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.288148 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 
07:13:18.306978 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.327085 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.347608 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.367564 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.387759 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.406909 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.427460 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.447022 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.466975 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.487554 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.507172 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.526739 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.547533 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.568153 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.587635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.614297 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.627551 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.648379 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.667348 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.687901 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.726171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfxr\" (UniqueName: \"kubernetes.io/projected/55c7556d-4740-4be7-bc47-f81c4c7374c6-kube-api-access-xhfxr\") pod \"machine-api-operator-5694c8668f-hs8qj\" (UID: \"55c7556d-4740-4be7-bc47-f81c4c7374c6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.747480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tppws\" (UniqueName: \"kubernetes.io/projected/82c86211-6b1e-41e0-80b6-898aec0123a3-kube-api-access-tppws\") pod \"apiserver-7bbb656c7d-njg8w\" (UID: \"82c86211-6b1e-41e0-80b6-898aec0123a3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.762979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8sdc\" (UniqueName: \"kubernetes.io/projected/005c94b6-beb8-49e1-93e2-119bc01cd795-kube-api-access-q8sdc\") pod \"apiserver-76f77b778f-rsm26\" (UID: \"005c94b6-beb8-49e1-93e2-119bc01cd795\") " pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.781821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzjx\" (UniqueName: \"kubernetes.io/projected/c36ffc53-b39c-4fff-b40e-0e618701060a-kube-api-access-8wzjx\") pod \"console-operator-58897d9998-2scn9\" (UID: \"c36ffc53-b39c-4fff-b40e-0e618701060a\") " pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.802240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62gq\" (UniqueName: \"kubernetes.io/projected/55bbbc0a-48ea-4633-b49b-3869f873c64f-kube-api-access-m62gq\") pod \"authentication-operator-69f744f599-5bh77\" (UID: \"55bbbc0a-48ea-4633-b49b-3869f873c64f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.821858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxg77\" (UniqueName: \"kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77\") pod \"console-f9d7485db-gtcln\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.842435 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs5jn\" (UniqueName: \"kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn\") pod \"route-controller-manager-6576b87f9c-m8h8z\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.862175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvbd\" (UniqueName: \"kubernetes.io/projected/e79f7ebf-0dac-4f86-b3f1-045904313fba-kube-api-access-8vvbd\") pod \"downloads-7954f5f757-bgn27\" (UID: 
\"e79f7ebf-0dac-4f86-b3f1-045904313fba\") " pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.882455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxvs9\" (UniqueName: \"kubernetes.io/projected/d6c7906f-ca7f-4b22-ab70-b38aad08121f-kube-api-access-mxvs9\") pod \"machine-approver-56656f9798-88dfg\" (UID: \"d6c7906f-ca7f-4b22-ab70-b38aad08121f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.902866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4bv2\" (UniqueName: \"kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2\") pod \"oauth-openshift-558db77b4-v4wlm\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.915406 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.921372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fchmc\" (UniqueName: \"kubernetes.io/projected/593b796b-f4d8-4c80-b84f-38f74cfbd37b-kube-api-access-fchmc\") pod \"openshift-config-operator-7777fb866f-g8grc\" (UID: \"593b796b-f4d8-4c80-b84f-38f74cfbd37b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.926005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.932758 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.941901 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjmd\" (UniqueName: \"kubernetes.io/projected/8c84d713-f4f9-4968-a086-95187d89c9c1-kube-api-access-7sjmd\") pod \"cluster-samples-operator-665b6dd947-r79gp\" (UID: \"8c84d713-f4f9-4968-a086-95187d89c9c1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.944517 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.948368 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.955451 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.963209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgr5\" (UniqueName: \"kubernetes.io/projected/4322806d-7a81-49aa-9e44-638c6cab8e57-kube-api-access-ltgr5\") pod \"openshift-apiserver-operator-796bbdcf4f-28jfb\" (UID: \"4322806d-7a81-49aa-9e44-638c6cab8e57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.980948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn462\" (UniqueName: \"kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462\") pod \"controller-manager-879f6c89f-5ktsq\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:18 crc kubenswrapper[4858]: I1122 07:13:18.993511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.005986 4858 request.go:700] Waited for 1.918815924s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.008984 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.009650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgc87\" (UniqueName: \"kubernetes.io/projected/a148e8ff-7b01-4625-b5db-76eec5c1469e-kube-api-access-sgc87\") pod \"openshift-controller-manager-operator-756b6f6bc6-jm22m\" (UID: \"a148e8ff-7b01-4625-b5db-76eec5c1469e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.021701 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.027464 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.050871 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.062420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.063729 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.074061 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.091820 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.098481 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.110752 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.112747 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.116422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.127712 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.148228 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.167603 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.188294 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.228617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gb45\" (UniqueName: \"kubernetes.io/projected/8d6bdadf-9903-44ef-b008-4f3864e83bb4-kube-api-access-9gb45\") pod \"migrator-59844c95c7-k74tq\" (UID: \"8d6bdadf-9903-44ef-b008-4f3864e83bb4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.241541 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.265543 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.299665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4458c793-c7d2-400a-876a-2724099c5c3a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.299704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9a5d871-647f-4486-8bb9-14e65650c259-proxy-tls\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.299751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74x4j\" (UniqueName: \"kubernetes.io/projected/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-kube-api-access-74x4j\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.299990 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46523c-2f95-4946-b4e3-f7869cdda903-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-service-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300284 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c93abfad-9450-43d5-824c-7ff52c2a613b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd747063-e6a4-407d-b448-f5f9197f3f3a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5zk\" (UniqueName: \"kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-client\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgb4n\" (UniqueName: \"kubernetes.io/projected/c93abfad-9450-43d5-824c-7ff52c2a613b-kube-api-access-wgb4n\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce09591-28f0-4ab0-956d-694afdddaa86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4458c793-c7d2-400a-876a-2724099c5c3a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-default-certificate\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmbz\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-kube-api-access-knmbz\") pod 
\"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.300987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd747063-e6a4-407d-b448-f5f9197f3f3a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301052 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b46523c-2f95-4946-b4e3-f7869cdda903-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4458c793-c7d2-400a-876a-2724099c5c3a-config\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b46523c-2f95-4946-b4e3-f7869cdda903-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba0e5a3-9474-4dde-a3c3-52390b657290-service-ca-bundle\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301558 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fk7bj\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-images\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-stats-auth\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.301865 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cec4f090-10a6-4206-b4a1-3876d40a0d4b-metrics-tls\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302069 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28vq\" (UniqueName: \"kubernetes.io/projected/dc728b9f-ad18-4c6a-927a-b6670e080ad9-kube-api-access-b28vq\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302099 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume\") pod 
\"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztcw5\" (UniqueName: \"kubernetes.io/projected/dba0e5a3-9474-4dde-a3c3-52390b657290-kube-api-access-ztcw5\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce09591-28f0-4ab0-956d-694afdddaa86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7fxj\" (UniqueName: \"kubernetes.io/projected/cec4f090-10a6-4206-b4a1-3876d40a0d4b-kube-api-access-z7fxj\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-metrics-certs\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302969 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt64\" (UniqueName: \"kubernetes.io/projected/f9a5d871-647f-4486-8bb9-14e65650c259-kube-api-access-cbt64\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.302998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ce09591-28f0-4ab0-956d-694afdddaa86-config\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.303515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-cabundle\") pod 
\"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.303558 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-key\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.303952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.304000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-config\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.304088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.304233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-serving-cert\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.304271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.304535 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.804515879 +0000 UTC m=+161.645938885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.320588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.322845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" event={"ID":"d6c7906f-ca7f-4b22-ab70-b38aad08121f","Type":"ContainerStarted","Data":"96444c83b15d65bc20373371f867c8f081e78bf826370be8099a510e6aec7b82"} Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410117 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.410378 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.910342965 +0000 UTC m=+161.751765971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-key\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324dfa6f-8489-4776-b7bf-9bf6a489478b-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-config\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwzq\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-kube-api-access-mqwzq\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-serving-cert\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4458c793-c7d2-400a-876a-2724099c5c3a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4fm\" (UniqueName: \"kubernetes.io/projected/53f7564e-935e-41b1-bf5a-58d1d509a014-kube-api-access-kn4fm\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbgq\" (UniqueName: \"kubernetes.io/projected/29e8f1af-f22c-44df-9493-8886f7d4045b-kube-api-access-tnbgq\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/324dfa6f-8489-4776-b7bf-9bf6a489478b-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9a5d871-647f-4486-8bb9-14e65650c259-proxy-tls\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-certs\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410795 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-mountpoint-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74x4j\" (UniqueName: \"kubernetes.io/projected/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-kube-api-access-74x4j\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46523c-2f95-4946-b4e3-f7869cdda903-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9l9\" (UniqueName: \"kubernetes.io/projected/d33abf93-ea42-48e0-82e1-6afc9c4542ca-kube-api-access-dc9l9\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-service-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410949 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-srv-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c93abfad-9450-43d5-824c-7ff52c2a613b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.410986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-plugins-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411024 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-srv-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5555fba1-f639-4d9b-992b-203cc9a88938-metrics-tls\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd747063-e6a4-407d-b448-f5f9197f3f3a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29e8f1af-f22c-44df-9493-8886f7d4045b-cert\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5555fba1-f639-4d9b-992b-203cc9a88938-config-volume\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411130 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5zk\" (UniqueName: 
\"kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411148 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-client\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgb4n\" (UniqueName: \"kubernetes.io/projected/c93abfad-9450-43d5-824c-7ff52c2a613b-kube-api-access-wgb4n\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce09591-28f0-4ab0-956d-694afdddaa86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791e360a-39d1-48f6-9e9e-2e768a1710ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7vz\" (UniqueName: \"kubernetes.io/projected/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-kube-api-access-hv7vz\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4458c793-c7d2-400a-876a-2724099c5c3a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-default-certificate\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-webhook-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf6j7\" (UniqueName: \"kubernetes.io/projected/556a50cc-cb65-4fb5-bee5-a88cbfd40341-kube-api-access-qf6j7\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-proxy-tls\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75bg\" (UniqueName: \"kubernetes.io/projected/791e360a-39d1-48f6-9e9e-2e768a1710ad-kube-api-access-r75bg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knmbz\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-kube-api-access-knmbz\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9kg\" (UniqueName: \"kubernetes.io/projected/d21a3b57-d4c4-474a-8be9-75c59a385d92-kube-api-access-9f9kg\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411515 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd747063-e6a4-407d-b448-f5f9197f3f3a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b46523c-2f95-4946-b4e3-f7869cdda903-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4458c793-c7d2-400a-876a-2724099c5c3a-config\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b46523c-2f95-4946-b4e3-f7869cdda903-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba0e5a3-9474-4dde-a3c3-52390b657290-service-ca-bundle\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgc8h\" (UniqueName: \"kubernetes.io/projected/5555fba1-f639-4d9b-992b-203cc9a88938-kube-api-access-lgc8h\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csdf\" (UniqueName: \"kubernetes.io/projected/a2875d04-e612-4175-befc-3a62d7ea9daf-kube-api-access-2csdf\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411720 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-csi-data-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411734 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-tmpfs\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7bj\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-images\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-stats-auth\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791e360a-39d1-48f6-9e9e-2e768a1710ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411838 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-registration-dir\") 
pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vd77\" (UniqueName: \"kubernetes.io/projected/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-kube-api-access-7vd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2875d04-e612-4175-befc-3a62d7ea9daf-serving-cert\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cec4f090-10a6-4206-b4a1-3876d40a0d4b-metrics-tls\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bkr\" (UniqueName: \"kubernetes.io/projected/6959fc44-3e32-4848-85cf-963d3f7e8c16-kube-api-access-r8bkr\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.411980 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/556a50cc-cb65-4fb5-bee5-a88cbfd40341-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b28vq\" (UniqueName: \"kubernetes.io/projected/dc728b9f-ad18-4c6a-927a-b6670e080ad9-kube-api-access-b28vq\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume\") pod \"collect-profiles-29396580-6456h\" (UID: 
\"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztcw5\" (UniqueName: \"kubernetes.io/projected/dba0e5a3-9474-4dde-a3c3-52390b657290-kube-api-access-ztcw5\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce09591-28f0-4ab0-956d-694afdddaa86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-profile-collector-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgzhg\" (UniqueName: \"kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-node-bootstrap-token\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7fxj\" (UniqueName: \"kubernetes.io/projected/cec4f090-10a6-4206-b4a1-3876d40a0d4b-kube-api-access-z7fxj\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-metrics-certs\") pod \"router-default-5444994796-xllks\" (UID: 
\"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412220 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tstsl\" (UniqueName: \"kubernetes.io/projected/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-kube-api-access-tstsl\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-socket-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt64\" (UniqueName: \"kubernetes.io/projected/f9a5d871-647f-4486-8bb9-14e65650c259-kube-api-access-cbt64\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ce09591-28f0-4ab0-956d-694afdddaa86-config\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412477 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2875d04-e612-4175-befc-3a62d7ea9daf-config\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-cabundle\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.412523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.415168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4458c793-c7d2-400a-876a-2724099c5c3a-config\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.415839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-key\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.415912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-config\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.416180 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.916163918 +0000 UTC m=+161.757586994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.416237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-images\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.416252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.416776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.417417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.417898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.418057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9a5d871-647f-4486-8bb9-14e65650c259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.418202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.418951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5ce09591-28f0-4ab0-956d-694afdddaa86-config\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.419666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dc728b9f-ad18-4c6a-927a-b6670e080ad9-signing-cabundle\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.421252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b46523c-2f95-4946-b4e3-f7869cdda903-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.421928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-service-ca\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.423836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce09591-28f0-4ab0-956d-694afdddaa86-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.424661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.425282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4458c793-c7d2-400a-876a-2724099c5c3a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.425416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd747063-e6a4-407d-b448-f5f9197f3f3a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.427599 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2scn9"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.428177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/cec4f090-10a6-4206-b4a1-3876d40a0d4b-metrics-tls\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.428733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.429454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.430724 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.430766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b46523c-2f95-4946-b4e3-f7869cdda903-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.431115 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9a5d871-647f-4486-8bb9-14e65650c259-proxy-tls\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.431187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c93abfad-9450-43d5-824c-7ff52c2a613b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.431268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd747063-e6a4-407d-b448-f5f9197f3f3a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.433521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-serving-cert\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.437593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-etcd-client\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.444163 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.462124 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af73c1f_5d33_4e17_8331_61cf5b084487.slice/crio-52e05b837c3954dc482fd1ce004877735a867a11e7c546d2601e66993682c3d4 WatchSource:0}: Error finding container 52e05b837c3954dc482fd1ce004877735a867a11e7c546d2601e66993682c3d4: Status 404 returned error can't find the container with id 52e05b837c3954dc482fd1ce004877735a867a11e7c546d2601e66993682c3d4 Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.464457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b46523c-2f95-4946-b4e3-f7869cdda903-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vb9gv\" (UID: \"2b46523c-2f95-4946-b4e3-f7869cdda903\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.469084 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hs8qj"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.484798 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bgn27"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.489873 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.491812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba0e5a3-9474-4dde-a3c3-52390b657290-service-ca-bundle\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.497384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-stats-auth\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.499231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-metrics-certs\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.501062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/dba0e5a3-9474-4dde-a3c3-52390b657290-default-certificate\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.505497 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk7bj\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgzhg\" (UniqueName: \"kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-node-bootstrap-token\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tstsl\" (UniqueName: \"kubernetes.io/projected/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-kube-api-access-tstsl\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-socket-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513612 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2875d04-e612-4175-befc-3a62d7ea9daf-config\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324dfa6f-8489-4776-b7bf-9bf6a489478b-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqwzq\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-kube-api-access-mqwzq\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513748 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4fm\" (UniqueName: \"kubernetes.io/projected/53f7564e-935e-41b1-bf5a-58d1d509a014-kube-api-access-kn4fm\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513753 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5bh77"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbgq\" (UniqueName: \"kubernetes.io/projected/29e8f1af-f22c-44df-9493-8886f7d4045b-kube-api-access-tnbgq\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/324dfa6f-8489-4776-b7bf-9bf6a489478b-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-certs\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.513942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-mountpoint-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.513972 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.013952601 +0000 UTC m=+161.855375617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9l9\" (UniqueName: \"kubernetes.io/projected/d33abf93-ea42-48e0-82e1-6afc9c4542ca-kube-api-access-dc9l9\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514023 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-mountpoint-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-srv-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-plugins-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-srv-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5555fba1-f639-4d9b-992b-203cc9a88938-metrics-tls\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5555fba1-f639-4d9b-992b-203cc9a88938-config-volume\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514131 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29e8f1af-f22c-44df-9493-8886f7d4045b-cert\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791e360a-39d1-48f6-9e9e-2e768a1710ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv7vz\" (UniqueName: \"kubernetes.io/projected/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-kube-api-access-hv7vz\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 
07:13:19.514220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-webhook-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf6j7\" (UniqueName: \"kubernetes.io/projected/556a50cc-cb65-4fb5-bee5-a88cbfd40341-kube-api-access-qf6j7\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-proxy-tls\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9kg\" (UniqueName: \"kubernetes.io/projected/d21a3b57-d4c4-474a-8be9-75c59a385d92-kube-api-access-9f9kg\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75bg\" (UniqueName: \"kubernetes.io/projected/791e360a-39d1-48f6-9e9e-2e768a1710ad-kube-api-access-r75bg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgc8h\" (UniqueName: \"kubernetes.io/projected/5555fba1-f639-4d9b-992b-203cc9a88938-kube-api-access-lgc8h\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csdf\" (UniqueName: \"kubernetes.io/projected/a2875d04-e612-4175-befc-3a62d7ea9daf-kube-api-access-2csdf\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-csi-data-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-tmpfs\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791e360a-39d1-48f6-9e9e-2e768a1710ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2875d04-e612-4175-befc-3a62d7ea9daf-serving-cert\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-registration-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vd77\" (UniqueName: \"kubernetes.io/projected/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-kube-api-access-7vd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bkr\" (UniqueName: \"kubernetes.io/projected/6959fc44-3e32-4848-85cf-963d3f7e8c16-kube-api-access-r8bkr\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/556a50cc-cb65-4fb5-bee5-a88cbfd40341-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.514630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-profile-collector-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.515568 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/324dfa6f-8489-4776-b7bf-9bf6a489478b-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.516536 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.519723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-certs\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.521164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.521895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-profile-collector-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.522547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-socket-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.522774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-plugins-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.522931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324dfa6f-8489-4776-b7bf-9bf6a489478b-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.523579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2875d04-e612-4175-befc-3a62d7ea9daf-config\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.523652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" 
(UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.524425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.525215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-csi-data-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.525546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-tmpfs\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.526194 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791e360a-39d1-48f6-9e9e-2e768a1710ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.526378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.526577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2875d04-e612-4175-befc-3a62d7ea9daf-serving-cert\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.527352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5555fba1-f639-4d9b-992b-203cc9a88938-metrics-tls\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.527965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/556a50cc-cb65-4fb5-bee5-a88cbfd40341-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.528176 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53f7564e-935e-41b1-bf5a-58d1d509a014-registration-dir\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.528664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-webhook-cert\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.529511 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.530207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knmbz\" (UniqueName: \"kubernetes.io/projected/bd747063-e6a4-407d-b448-f5f9197f3f3a-kube-api-access-knmbz\") pod \"cluster-image-registry-operator-dc59b4c8b-jj8h6\" (UID: \"bd747063-e6a4-407d-b448-f5f9197f3f3a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.530573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d33abf93-ea42-48e0-82e1-6afc9c4542ca-node-bootstrap-token\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.535663 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/791e360a-39d1-48f6-9e9e-2e768a1710ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.538922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-proxy-tls\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.542095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29e8f1af-f22c-44df-9493-8886f7d4045b-cert\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.544103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6959fc44-3e32-4848-85cf-963d3f7e8c16-srv-cert\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.544652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d21a3b57-d4c4-474a-8be9-75c59a385d92-srv-cert\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.547114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbt64\" (UniqueName: \"kubernetes.io/projected/f9a5d871-647f-4486-8bb9-14e65650c259-kube-api-access-cbt64\") pod \"machine-config-operator-74547568cd-ndcl9\" (UID: \"f9a5d871-647f-4486-8bb9-14e65650c259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.557037 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rsm26"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.559900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgb4n\" (UniqueName: \"kubernetes.io/projected/c93abfad-9450-43d5-824c-7ff52c2a613b-kube-api-access-wgb4n\") pod \"multus-admission-controller-857f4d67dd-znzs2\" (UID: \"c93abfad-9450-43d5-824c-7ff52c2a613b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.579191 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.581216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ce09591-28f0-4ab0-956d-694afdddaa86-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-slbpn\" (UID: \"5ce09591-28f0-4ab0-956d-694afdddaa86\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.582391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.582443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5555fba1-f639-4d9b-992b-203cc9a88938-config-volume\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.583317 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode79f7ebf_0dac_4f86_b3f1_045904313fba.slice/crio-b26dddd69ba5e5428947ffbe4a86b1fd85bf9bca28cdda4b6574d4d6ec22f51f WatchSource:0}: Error finding container b26dddd69ba5e5428947ffbe4a86b1fd85bf9bca28cdda4b6574d4d6ec22f51f: Status 404 returned error can't find the container with id b26dddd69ba5e5428947ffbe4a86b1fd85bf9bca28cdda4b6574d4d6ec22f51f Nov 22 
07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.583807 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82c86211_6b1e_41e0_80b6_898aec0123a3.slice/crio-f448c3bd8ced001bce7525391a3f6987e6f4a2797114eb9e97576b6cd80f16f0 WatchSource:0}: Error finding container f448c3bd8ced001bce7525391a3f6987e6f4a2797114eb9e97576b6cd80f16f0: Status 404 returned error can't find the container with id f448c3bd8ced001bce7525391a3f6987e6f4a2797114eb9e97576b6cd80f16f0 Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.593589 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.601499 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4458c793-c7d2-400a-876a-2724099c5c3a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dzz75\" (UID: \"4458c793-c7d2-400a-876a-2724099c5c3a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.615478 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.618958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.619254 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.11924126 +0000 UTC m=+161.960664266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.622787 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.625989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztcw5\" (UniqueName: \"kubernetes.io/projected/dba0e5a3-9474-4dde-a3c3-52390b657290-kube-api-access-ztcw5\") pod \"router-default-5444994796-xllks\" (UID: \"dba0e5a3-9474-4dde-a3c3-52390b657290\") " pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.629600 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.636572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.639908 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.642556 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.643630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b28vq\" (UniqueName: \"kubernetes.io/projected/dc728b9f-ad18-4c6a-927a-b6670e080ad9-kube-api-access-b28vq\") pod \"service-ca-9c57cc56f-t589f\" (UID: \"dc728b9f-ad18-4c6a-927a-b6670e080ad9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.645144 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.652977 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t589f" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.662682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7fxj\" (UniqueName: \"kubernetes.io/projected/cec4f090-10a6-4206-b4a1-3876d40a0d4b-kube-api-access-z7fxj\") pod \"dns-operator-744455d44c-6bl5t\" (UID: \"cec4f090-10a6-4206-b4a1-3876d40a0d4b\") " pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.666315 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda148e8ff_7b01_4625_b5db_76eec5c1469e.slice/crio-973685ec89d01b2126e8da32bf3f8b461dbb63532a56aa799612cfe882ab6277 WatchSource:0}: Error finding container 973685ec89d01b2126e8da32bf3f8b461dbb63532a56aa799612cfe882ab6277: Status 404 returned error can't find the container with id 973685ec89d01b2126e8da32bf3f8b461dbb63532a56aa799612cfe882ab6277 Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.676556 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.683966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74x4j\" (UniqueName: \"kubernetes.io/projected/9775c1d5-6743-456e-91b6-dc0ef2f4f5cb-kube-api-access-74x4j\") pod \"etcd-operator-b45778765-kgzjd\" (UID: \"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.706852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.709350 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8grc"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.721483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5zk\" (UniqueName: \"kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk\") pod \"collect-profiles-29396580-6456h\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.721531 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.721600 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.221581406 +0000 UTC m=+162.063004402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.722527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.724811 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:20.224794367 +0000 UTC m=+162.066217373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.766002 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbgq\" (UniqueName: \"kubernetes.io/projected/29e8f1af-f22c-44df-9493-8886f7d4045b-kube-api-access-tnbgq\") pod \"ingress-canary-s7s7n\" (UID: \"29e8f1af-f22c-44df-9493-8886f7d4045b\") " pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.789829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tstsl\" (UniqueName: \"kubernetes.io/projected/779f6711-3ae2-4fdf-b7a2-8755c5373eb3-kube-api-access-tstsl\") pod \"machine-config-controller-84d6567774-fd8qn\" (UID: \"779f6711-3ae2-4fdf-b7a2-8755c5373eb3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.790987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.798175 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s7s7n" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.802238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.815105 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgzhg\" (UniqueName: \"kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg\") pod \"marketplace-operator-79b997595-qbgwx\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.823549 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.824008 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.323994083 +0000 UTC m=+162.165417089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.824169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9l9\" (UniqueName: \"kubernetes.io/projected/d33abf93-ea42-48e0-82e1-6afc9c4542ca-kube-api-access-dc9l9\") pod \"machine-config-server-gx8tl\" (UID: \"d33abf93-ea42-48e0-82e1-6afc9c4542ca\") " pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.825876 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.846101 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqwzq\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-kube-api-access-mqwzq\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.868237 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6"] Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.870189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4fm\" (UniqueName: \"kubernetes.io/projected/53f7564e-935e-41b1-bf5a-58d1d509a014-kube-api-access-kn4fm\") pod \"csi-hostpathplugin-55rnx\" (UID: \"53f7564e-935e-41b1-bf5a-58d1d509a014\") " pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.886034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgc8h\" (UniqueName: \"kubernetes.io/projected/5555fba1-f639-4d9b-992b-203cc9a88938-kube-api-access-lgc8h\") pod \"dns-default-xsjfg\" (UID: \"5555fba1-f639-4d9b-992b-203cc9a88938\") " pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.887815 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.904029 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.904572 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bkr\" (UniqueName: \"kubernetes.io/projected/6959fc44-3e32-4848-85cf-963d3f7e8c16-kube-api-access-r8bkr\") pod \"catalog-operator-68c6474976-klt75\" (UID: \"6959fc44-3e32-4848-85cf-963d3f7e8c16\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.921011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/324dfa6f-8489-4776-b7bf-9bf6a489478b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnfrt\" (UID: \"324dfa6f-8489-4776-b7bf-9bf6a489478b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.925447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:19 crc kubenswrapper[4858]: E1122 07:13:19.925881 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.425870665 +0000 UTC m=+162.267293671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.940029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csdf\" (UniqueName: \"kubernetes.io/projected/a2875d04-e612-4175-befc-3a62d7ea9daf-kube-api-access-2csdf\") pod \"service-ca-operator-777779d784-nwq72\" (UID: \"a2875d04-e612-4175-befc-3a62d7ea9daf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.960316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv7vz\" (UniqueName: \"kubernetes.io/projected/85fba749-4c9a-41e6-b7f2-eb1f82777f1a-kube-api-access-hv7vz\") pod \"packageserver-d55dfcdfc-nzt8k\" (UID: \"85fba749-4c9a-41e6-b7f2-eb1f82777f1a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.962697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.972786 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.975658 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d6bdadf_9903_44ef_b008_4f3864e83bb4.slice/crio-ed39aaef360dec30fc3e0385800d81db0ed0befe32f274960799e5deb680eb3a WatchSource:0}: Error finding container ed39aaef360dec30fc3e0385800d81db0ed0befe32f274960799e5deb680eb3a: Status 404 returned error can't find the container with id ed39aaef360dec30fc3e0385800d81db0ed0befe32f274960799e5deb680eb3a Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.980212 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" Nov 22 07:13:19 crc kubenswrapper[4858]: W1122 07:13:19.994184 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd747063_e6a4_407d_b448_f5f9197f3f3a.slice/crio-81ef31e0345a6e5fdfd48d105595308425dfd820d0b989bee006e9ebbfa56cad WatchSource:0}: Error finding container 81ef31e0345a6e5fdfd48d105595308425dfd820d0b989bee006e9ebbfa56cad: Status 404 returned error can't find the container with id 81ef31e0345a6e5fdfd48d105595308425dfd820d0b989bee006e9ebbfa56cad Nov 22 07:13:19 crc kubenswrapper[4858]: I1122 07:13:19.997801 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.002895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vd77\" (UniqueName: \"kubernetes.io/projected/e2051416-ad0d-472b-bcf1-b3b236ab3c6e-kube-api-access-7vd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-8ptsp\" (UID: \"e2051416-ad0d-472b-bcf1-b3b236ab3c6e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.003257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9kg\" (UniqueName: \"kubernetes.io/projected/d21a3b57-d4c4-474a-8be9-75c59a385d92-kube-api-access-9f9kg\") pod \"olm-operator-6b444d44fb-76fn7\" (UID: \"d21a3b57-d4c4-474a-8be9-75c59a385d92\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.009988 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.015974 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.024983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75bg\" (UniqueName: \"kubernetes.io/projected/791e360a-39d1-48f6-9e9e-2e768a1710ad-kube-api-access-r75bg\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jl4s\" (UID: \"791e360a-39d1-48f6-9e9e-2e768a1710ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.025928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.026109 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.526080064 +0000 UTC m=+162.367503070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.027559 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.027856 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.52784156 +0000 UTC m=+162.369264566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.050066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf6j7\" (UniqueName: \"kubernetes.io/projected/556a50cc-cb65-4fb5-bee5-a88cbfd40341-kube-api-access-qf6j7\") pod \"package-server-manager-789f6589d5-ngf52\" (UID: \"556a50cc-cb65-4fb5-bee5-a88cbfd40341\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.063723 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.064248 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.070521 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.082633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.090767 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.109082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.122974 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gx8tl" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.131725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.132037 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.632022783 +0000 UTC m=+162.473445789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.148326 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.162573 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.233596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.234224 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.734211504 +0000 UTC m=+162.575634510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.258198 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-znzs2"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.288898 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.326757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerStarted","Data":"b26dddd69ba5e5428947ffbe4a86b1fd85bf9bca28cdda4b6574d4d6ec22f51f"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.327983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" event={"ID":"82c86211-6b1e-41e0-80b6-898aec0123a3","Type":"ContainerStarted","Data":"f448c3bd8ced001bce7525391a3f6987e6f4a2797114eb9e97576b6cd80f16f0"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.328783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" event={"ID":"593b796b-f4d8-4c80-b84f-38f74cfbd37b","Type":"ContainerStarted","Data":"91509f40d9ff95ff2e578f71567205717d0abb2a2b6add4c69d30a4dbbc1205e"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.329733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" event={"ID":"8d6bdadf-9903-44ef-b008-4f3864e83bb4","Type":"ContainerStarted","Data":"ed39aaef360dec30fc3e0385800d81db0ed0befe32f274960799e5deb680eb3a"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.330575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" event={"ID":"a148e8ff-7b01-4625-b5db-76eec5c1469e","Type":"ContainerStarted","Data":"973685ec89d01b2126e8da32bf3f8b461dbb63532a56aa799612cfe882ab6277"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.331931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" event={"ID":"8c84d713-f4f9-4968-a086-95187d89c9c1","Type":"ContainerStarted","Data":"ac0a9bdea962457d2f9daaa13e89d6560727b4e181bd1f1011f2eb3f95ff257e"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.338874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.339053 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.839027518 +0000 UTC m=+162.680450524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.339251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.339665 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.839649628 +0000 UTC m=+162.681072674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.340960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" event={"ID":"d6c7906f-ca7f-4b22-ab70-b38aad08121f","Type":"ContainerStarted","Data":"0357d1b2bf16865187e4e0f135fcfea1c13f0a1fb1b1f6dae23a015237776288"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.342342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xllks" event={"ID":"dba0e5a3-9474-4dde-a3c3-52390b657290","Type":"ContainerStarted","Data":"8ea639579ffa9d078b35cf9848f0c0041c956c2659c0a32c5f38df9f47301640"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.343496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" event={"ID":"55aeedcc-4db9-4ba7-87d3-bae650dc8af0","Type":"ContainerStarted","Data":"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.343543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" event={"ID":"55aeedcc-4db9-4ba7-87d3-bae650dc8af0","Type":"ContainerStarted","Data":"2f9b7c7a369be9dcd1f1efff18877a4dac6ac6a375f59f0606c3afca066d1364"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.348378 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2scn9" event={"ID":"c36ffc53-b39c-4fff-b40e-0e618701060a","Type":"ContainerStarted","Data":"ed1ae1648ae2ee6fe8ae6ea90732091c12e98849fd3dd976aaa2692616f428ba"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.348427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-2scn9" event={"ID":"c36ffc53-b39c-4fff-b40e-0e618701060a","Type":"ContainerStarted","Data":"063cf856c7a5f24c18cf72a2a02749ba3e05450404a6c172dc2a6c55beec5ca3"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.351260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" event={"ID":"005c94b6-beb8-49e1-93e2-119bc01cd795","Type":"ContainerStarted","Data":"68b968a90cb75976188f3abece55b874a7fc9ebbe97a0522242f3862b27b1e8a"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.352173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" event={"ID":"efcd6f6a-7dd3-426d-9e27-b991c98b47a4","Type":"ContainerStarted","Data":"d2ffe1db14a410f647f1f9c8b9762f235669bdc937b92505acb723ae4b1c2325"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.352822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" event={"ID":"55bbbc0a-48ea-4633-b49b-3869f873c64f","Type":"ContainerStarted","Data":"48a45f8a26a1c17f4ef309021847fff7748144da9eba9a0df31ce115a19c7780"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.353439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" event={"ID":"62cf2e91-277d-4243-93f5-7cc9416f3f6e","Type":"ContainerStarted","Data":"880a985d92139278cf4e4daf4ee4461caaa14bb56767eb18ebb7d5b99f989a30"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.354616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" event={"ID":"4322806d-7a81-49aa-9e44-638c6cab8e57","Type":"ContainerStarted","Data":"368024024b69e4816de525fe4d0d52e338076555aec413263f306bc6ff946b65"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.355683 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" event={"ID":"55c7556d-4740-4be7-bc47-f81c4c7374c6","Type":"ContainerStarted","Data":"bc6fb93e7ee40e4077a857f82ad4b36a00b1d86e53925327c33df56fa1b3657a"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.355711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" event={"ID":"55c7556d-4740-4be7-bc47-f81c4c7374c6","Type":"ContainerStarted","Data":"f0d08d023cd76a8c685d976d1320e725b0b6d43d7d65f03cef839123968effce"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.359523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gtcln" event={"ID":"6af73c1f-5d33-4e17-8331-61cf5b084487","Type":"ContainerStarted","Data":"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.359577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gtcln" event={"ID":"6af73c1f-5d33-4e17-8331-61cf5b084487","Type":"ContainerStarted","Data":"52e05b837c3954dc482fd1ce004877735a867a11e7c546d2601e66993682c3d4"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.360819 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" 
event={"ID":"bd747063-e6a4-407d-b448-f5f9197f3f3a","Type":"ContainerStarted","Data":"81ef31e0345a6e5fdfd48d105595308425dfd820d0b989bee006e9ebbfa56cad"} Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.361589 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.388653 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t589f"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.390686 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.440229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.440410 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.940384813 +0000 UTC m=+162.781807829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.440669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.440987 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.940978782 +0000 UTC m=+162.782401788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: W1122 07:13:20.534343 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4458c793_c7d2_400a_876a_2724099c5c3a.slice/crio-e7b1cb686bcfbe71d5760d1c2f4f0681bad36659af1722e576a3e753d0d9cb51 WatchSource:0}: Error finding container e7b1cb686bcfbe71d5760d1c2f4f0681bad36659af1722e576a3e753d0d9cb51: Status 404 returned error can't find the container with id e7b1cb686bcfbe71d5760d1c2f4f0681bad36659af1722e576a3e753d0d9cb51 Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.541153 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.541294 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.041272293 +0000 UTC m=+162.882695299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.541489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.541943 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.041925134 +0000 UTC m=+162.883348140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: W1122 07:13:20.555552 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a5d871_647f_4486_8bb9_14e65650c259.slice/crio-312a6b4254f8ebcc30cece1af21724c9b88b0354fcb295f62f7e14a8b9bd3f5c WatchSource:0}: Error finding container 312a6b4254f8ebcc30cece1af21724c9b88b0354fcb295f62f7e14a8b9bd3f5c: Status 404 returned error can't find the container with id 312a6b4254f8ebcc30cece1af21724c9b88b0354fcb295f62f7e14a8b9bd3f5c Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.589641 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s7s7n"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.629644 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kgzjd"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.642836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.643119 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.143104853 +0000 UTC m=+162.984527859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.744387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.744688 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.244674015 +0000 UTC m=+163.086097021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: W1122 07:13:20.791193 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9775c1d5_6743_456e_91b6_dc0ef2f4f5cb.slice/crio-9aa4752b53c6058c748f00c45bd446ecb4b3e07b35d6cde9f05f658188e0cdd0 WatchSource:0}: Error finding container 9aa4752b53c6058c748f00c45bd446ecb4b3e07b35d6cde9f05f658188e0cdd0: Status 404 returned error can't find the container with id 9aa4752b53c6058c748f00c45bd446ecb4b3e07b35d6cde9f05f658188e0cdd0 Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.844916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.845091 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.34506994 +0000 UTC m=+163.186492946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.845629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.846063 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.34605484 +0000 UTC m=+163.187477846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.886397 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwq72"] Nov 22 07:13:20 crc kubenswrapper[4858]: I1122 07:13:20.948166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4858]: E1122 07:13:20.948575 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.448559682 +0000 UTC m=+163.289982688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.052918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.053501 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.553480048 +0000 UTC m=+163.394903114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.147261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6bl5t"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.154155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.154311 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.654289516 +0000 UTC m=+163.495712522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.154732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.155015 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.655006539 +0000 UTC m=+163.496429545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.158347 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.173145 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75"] Nov 22 07:13:21 crc kubenswrapper[4858]: W1122 07:13:21.220207 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb9f615_dc32_4f01_884b_db24dfb05c34.slice/crio-8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27 WatchSource:0}: Error finding container 8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27: Status 404 returned error can't find the container with id 8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27 Nov 22 07:13:21 crc kubenswrapper[4858]: W1122 07:13:21.251933 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6959fc44_3e32_4848_85cf_963d3f7e8c16.slice/crio-c056996625bd7e4d0412aad0213312fc16a2ed3f9c0dee7e8e2822d14dd6bf01 WatchSource:0}: Error finding container c056996625bd7e4d0412aad0213312fc16a2ed3f9c0dee7e8e2822d14dd6bf01: Status 404 returned error can't find the container with id c056996625bd7e4d0412aad0213312fc16a2ed3f9c0dee7e8e2822d14dd6bf01 Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.259821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.260276 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.760256836 +0000 UTC m=+163.601679842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.307719 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.336458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.338588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.360858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.360906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.360987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.361296 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.861283031 +0000 UTC m=+163.702706037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.362463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.372916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gx8tl" event={"ID":"d33abf93-ea42-48e0-82e1-6afc9c4542ca","Type":"ContainerStarted","Data":"31cda02d9fd177736141f12bde83bc1c24a9b72d58a0b4ca8d6cf9b16f0b8c75"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.374145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.379762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" event={"ID":"6959fc44-3e32-4848-85cf-963d3f7e8c16","Type":"ContainerStarted","Data":"c056996625bd7e4d0412aad0213312fc16a2ed3f9c0dee7e8e2822d14dd6bf01"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.383588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" event={"ID":"c93abfad-9450-43d5-824c-7ff52c2a613b","Type":"ContainerStarted","Data":"712a1dc72b7c9db9dfbad2920016aa31ab8f72389a293bbedb889390f6051528"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.385250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerStarted","Data":"5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.385417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.387732 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.387786 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" 
Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.388670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" event={"ID":"f9a5d871-647f-4486-8bb9-14e65650c259","Type":"ContainerStarted","Data":"312a6b4254f8ebcc30cece1af21724c9b88b0354fcb295f62f7e14a8b9bd3f5c"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.391380 4858 generic.go:334] "Generic (PLEG): container finished" podID="82c86211-6b1e-41e0-80b6-898aec0123a3" containerID="d5931f029bad257d2c7b07a96f7c86e852d803721f62967a1dd48bd475ccd34b" exitCode=0 Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.391428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" event={"ID":"82c86211-6b1e-41e0-80b6-898aec0123a3","Type":"ContainerDied","Data":"d5931f029bad257d2c7b07a96f7c86e852d803721f62967a1dd48bd475ccd34b"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.394591 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" event={"ID":"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb","Type":"ContainerStarted","Data":"9aa4752b53c6058c748f00c45bd446ecb4b3e07b35d6cde9f05f658188e0cdd0"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.398499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" event={"ID":"a148e8ff-7b01-4625-b5db-76eec5c1469e","Type":"ContainerStarted","Data":"e9ac90428a294161d7eda34a63be2efdc6ae87c08a1091a6a7bc51c57ede5574"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.410146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" event={"ID":"5ce09591-28f0-4ab0-956d-694afdddaa86","Type":"ContainerStarted","Data":"e1dc7a64082b87547b9783c2fc64e75680c84f9af7a9c89a79f5daf9ffa37d1f"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.412242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" event={"ID":"55bbbc0a-48ea-4633-b49b-3869f873c64f","Type":"ContainerStarted","Data":"669b8391ff6cb63736dc67be6fd8b1118d4c58ed5bdfe0380857f62c0344d8ad"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.413674 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" event={"ID":"7cb9f615-dc32-4f01-884b-db24dfb05c34","Type":"ContainerStarted","Data":"8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.428657 4858 generic.go:334] "Generic (PLEG): container finished" podID="005c94b6-beb8-49e1-93e2-119bc01cd795" containerID="03c4ae6851fd679b05435a069d439f9541de9536f17b467f55d081ad71657e0f" exitCode=0 Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.428729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" event={"ID":"005c94b6-beb8-49e1-93e2-119bc01cd795","Type":"ContainerDied","Data":"03c4ae6851fd679b05435a069d439f9541de9536f17b467f55d081ad71657e0f"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.435869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" 
event={"ID":"2b46523c-2f95-4946-b4e3-f7869cdda903","Type":"ContainerStarted","Data":"0085e058d55d6c3345087bf0090c7a8f2b2520e139357b19381fd30fdc4dfdf5"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.438522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" event={"ID":"62cf2e91-277d-4243-93f5-7cc9416f3f6e","Type":"ContainerStarted","Data":"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.439954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" event={"ID":"cec4f090-10a6-4206-b4a1-3876d40a0d4b","Type":"ContainerStarted","Data":"40fae7672344cbfe43d7e38e74ccda72f90ee9daa639c7a3e00dcced84561dbc"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.441436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" event={"ID":"a2875d04-e612-4175-befc-3a62d7ea9daf","Type":"ContainerStarted","Data":"f5b8dcaa329565b25cafd31306b6f1c0c1586d6a8cf6a61f85c4bfcc7bdd5842"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.443931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" event={"ID":"4458c793-c7d2-400a-876a-2724099c5c3a","Type":"ContainerStarted","Data":"e7b1cb686bcfbe71d5760d1c2f4f0681bad36659af1722e576a3e753d0d9cb51"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.453283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s7s7n" event={"ID":"29e8f1af-f22c-44df-9493-8886f7d4045b","Type":"ContainerStarted","Data":"2fc71a91b070f21f0b7bece5c2a84582bcb1f4a7192244653719e26c99747f8e"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.458018 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.459528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t589f" event={"ID":"dc728b9f-ad18-4c6a-927a-b6670e080ad9","Type":"ContainerStarted","Data":"b59e4c56d03614c35bcd478669743c4c8b3b4fbb16e0004ec867a0d5bef18cca"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.462186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.462762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.462887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.463875 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" event={"ID":"efcd6f6a-7dd3-426d-9e27-b991c98b47a4","Type":"ContainerStarted","Data":"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba"} Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.464001 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.963977328 +0000 UTC m=+163.805400334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.464627 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bgn27" podStartSLOduration=127.464570676 podStartE2EDuration="2m7.464570676s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.457798283 +0000 UTC m=+163.299221299" watchObservedRunningTime="2025-11-22 07:13:21.464570676 +0000 UTC m=+163.305993682" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.468862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" event={"ID":"593b796b-f4d8-4c80-b84f-38f74cfbd37b","Type":"ContainerStarted","Data":"2d74512a7aed5f4c70f0d52156aca83f5e501bf3c2e72b49e399a411d961ba40"} Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.468909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.469134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.470108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.471208 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m8h8z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.471241 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.471253 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-2scn9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.471290 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2scn9" podUID="c36ffc53-b39c-4fff-b40e-0e618701060a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 22 07:13:21 crc 
kubenswrapper[4858]: I1122 07:13:21.471767 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.471852 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-55rnx"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.473053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.499443 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5bh77" podStartSLOduration=127.499424535 podStartE2EDuration="2m7.499424535s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.498460545 +0000 UTC m=+163.339883571" watchObservedRunningTime="2025-11-22 07:13:21.499424535 +0000 UTC m=+163.340847541" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.551633 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.556672 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.562292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xsjfg"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.564118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.568836 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.068819683 +0000 UTC m=+163.910242689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.572408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k"] Nov 22 07:13:21 crc kubenswrapper[4858]: W1122 07:13:21.573382 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfebf775b_8c73_4ff1_99a0_ef53e4f20cd1.slice/crio-dcdd8a4593ed4399a007e729e070297c15f2d6cb216dc63740b58a33a4127c56 WatchSource:0}: Error finding container dcdd8a4593ed4399a007e729e070297c15f2d6cb216dc63740b58a33a4127c56: Status 404 returned error can't find the container with id dcdd8a4593ed4399a007e729e070297c15f2d6cb216dc63740b58a33a4127c56 Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.601195 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" podStartSLOduration=126.601179383 podStartE2EDuration="2m6.601179383s" podCreationTimestamp="2025-11-22 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.600473921 +0000 UTC m=+163.441896947" watchObservedRunningTime="2025-11-22 07:13:21.601179383 +0000 UTC m=+163.442602399" Nov 22 07:13:21 crc kubenswrapper[4858]: W1122 07:13:21.613653 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85fba749_4c9a_41e6_b7f2_eb1f82777f1a.slice/crio-37d8fb6768945ea7a7091a192222d06728e42719796f6900ec5efcab0c3e7362 WatchSource:0}: Error finding container 37d8fb6768945ea7a7091a192222d06728e42719796f6900ec5efcab0c3e7362: Status 404 returned error can't find the container with id 37d8fb6768945ea7a7091a192222d06728e42719796f6900ec5efcab0c3e7362 Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.625515 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s"] Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.649830 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.665575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.665786 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:22.165754118 +0000 UTC m=+164.007177124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.665963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.666377 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.166369067 +0000 UTC m=+164.007792073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.724320 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gtcln" podStartSLOduration=127.724299544 podStartE2EDuration="2m7.724299544s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.721883747 +0000 UTC m=+163.563306753" watchObservedRunningTime="2025-11-22 07:13:21.724299544 +0000 UTC m=+163.565722550" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.750737 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.767312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.767474 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.267452024 +0000 UTC m=+164.108875040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.768023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.768388 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.268376423 +0000 UTC m=+164.109799429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.838827 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2scn9" podStartSLOduration=127.838806273 podStartE2EDuration="2m7.838806273s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.837381658 +0000 UTC m=+163.678804684" watchObservedRunningTime="2025-11-22 07:13:21.838806273 +0000 UTC m=+163.680229279" Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.869290 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.870256 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.370228693 +0000 UTC m=+164.211651699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.870852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.871200 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.371191924 +0000 UTC m=+164.212614930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4858]: I1122 07:13:21.971870 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4858]: E1122 07:13:21.972202 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.472185167 +0000 UTC m=+164.313608173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.074254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.075065 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.575047169 +0000 UTC m=+164.416470175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: W1122 07:13:22.133509 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-08b1f547ee6e702dbc0570346a79b7c81f5a5bb3554ee50d21f1d087dd913796 WatchSource:0}: Error finding container 08b1f547ee6e702dbc0570346a79b7c81f5a5bb3554ee50d21f1d087dd913796: Status 404 returned error can't find the container with id 08b1f547ee6e702dbc0570346a79b7c81f5a5bb3554ee50d21f1d087dd913796 Nov 22 07:13:22 crc kubenswrapper[4858]: W1122 07:13:22.167816 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cf565987d062fb45b4edb86c82675900259c2198e361eee9812777d5d0e577ec WatchSource:0}: Error finding container cf565987d062fb45b4edb86c82675900259c2198e361eee9812777d5d0e577ec: Status 404 returned error can't find the container with id cf565987d062fb45b4edb86c82675900259c2198e361eee9812777d5d0e577ec Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.175639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.176069 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:22.676031863 +0000 UTC m=+164.517454889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.277123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.277496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.777484511 +0000 UTC m=+164.618907517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.378151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.378347 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.878307629 +0000 UTC m=+164.719730635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.378678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.378971 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.87896234 +0000 UTC m=+164.720385346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.476123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" event={"ID":"791e360a-39d1-48f6-9e9e-2e768a1710ad","Type":"ContainerStarted","Data":"a76fd4bade8c2a28c95a4978bfbb51aa9d1f3c8c9f537da31b5857ce86d14e37"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.477582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" event={"ID":"febf775b-8c73-4ff1-99a0-ef53e4f20cd1","Type":"ContainerStarted","Data":"dcdd8a4593ed4399a007e729e070297c15f2d6cb216dc63740b58a33a4127c56"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.478998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" event={"ID":"e2051416-ad0d-472b-bcf1-b3b236ab3c6e","Type":"ContainerStarted","Data":"f64c04cc7b695d0e8654596fa87465b90f137067491b89cc266a968338e1aff1"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.479420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.479580 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:22.979564391 +0000 UTC m=+164.820987397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.479633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.479911 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.979904531 +0000 UTC m=+164.821327537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.480435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" event={"ID":"53f7564e-935e-41b1-bf5a-58d1d509a014","Type":"ContainerStarted","Data":"b53292670baf844633c585cf67ccad0d8cbdc375d902f0e16c06c171f552d38b"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.482899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" event={"ID":"2b46523c-2f95-4946-b4e3-f7869cdda903","Type":"ContainerStarted","Data":"95ed887c3942696307b9e2e03493593934d2327b6bbf4ddf3b610680de284d8b"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.484417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" event={"ID":"bd747063-e6a4-407d-b448-f5f9197f3f3a","Type":"ContainerStarted","Data":"4b282fd70f5d000378d93a73f36a4d485d35a716f9400e7140bf5c71dbcae32a"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.485626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" event={"ID":"9775c1d5-6743-456e-91b6-dc0ef2f4f5cb","Type":"ContainerStarted","Data":"9652ec7c970cea983ac8806a7d2a16ad3c3714b3dcfe4cfc79168f4cc33b7339"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.486759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" event={"ID":"d21a3b57-d4c4-474a-8be9-75c59a385d92","Type":"ContainerStarted","Data":"4437d6eedb190c97b1bee8dc56fdcb33f737e4ae105e71736fcc44ea8f0f3520"} Nov 22 07:13:22 
crc kubenswrapper[4858]: I1122 07:13:22.487685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ac8dc474199dfef7ea75ba120605fd03be89eb2af02c0b88c2b1ce97bd5c89c3"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.489146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" event={"ID":"556a50cc-cb65-4fb5-bee5-a88cbfd40341","Type":"ContainerStarted","Data":"09b4e31d9785cd71a9cb005f8ce722e0a81d8859fb15e6dda172e873333f3963"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.490389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" event={"ID":"c93abfad-9450-43d5-824c-7ff52c2a613b","Type":"ContainerStarted","Data":"5375372e336e96aed28fc7f540fd236d88d5d8cf1572f0abfb4810044d341af7"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.492002 4858 generic.go:334] "Generic (PLEG): container finished" podID="593b796b-f4d8-4c80-b84f-38f74cfbd37b" containerID="2d74512a7aed5f4c70f0d52156aca83f5e501bf3c2e72b49e399a411d961ba40" exitCode=0 Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.492066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" event={"ID":"593b796b-f4d8-4c80-b84f-38f74cfbd37b","Type":"ContainerDied","Data":"2d74512a7aed5f4c70f0d52156aca83f5e501bf3c2e72b49e399a411d961ba40"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.494120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cf565987d062fb45b4edb86c82675900259c2198e361eee9812777d5d0e577ec"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.495607 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" event={"ID":"85fba749-4c9a-41e6-b7f2-eb1f82777f1a","Type":"ContainerStarted","Data":"37d8fb6768945ea7a7091a192222d06728e42719796f6900ec5efcab0c3e7362"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.497423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xsjfg" event={"ID":"5555fba1-f639-4d9b-992b-203cc9a88938","Type":"ContainerStarted","Data":"eedf3b4a8cc1aa74424822cd7ee8ddd095d5df6c8d48ce705391e9b2768f6501"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.498920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" event={"ID":"324dfa6f-8489-4776-b7bf-9bf6a489478b","Type":"ContainerStarted","Data":"87bba90ac221b8b261521befa4c8fa08b1a1eebd3c6c10981daa80508cb050c1"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.500003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"08b1f547ee6e702dbc0570346a79b7c81f5a5bb3554ee50d21f1d087dd913796"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.502436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" 
event={"ID":"779f6711-3ae2-4fdf-b7a2-8755c5373eb3","Type":"ContainerStarted","Data":"3f52add9292688e4728052fb7e0b7c7615e7d3291015505d71f0d5de7f25fcb4"} Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.503997 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.504057 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.504171 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m8h8z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.504237 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.505172 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-2scn9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.505280 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2scn9" podUID="c36ffc53-b39c-4fff-b40e-0e618701060a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.557775 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" podStartSLOduration=128.557753195 podStartE2EDuration="2m8.557753195s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.554365308 +0000 UTC m=+164.395788324" watchObservedRunningTime="2025-11-22 07:13:22.557753195 +0000 UTC m=+164.399176201" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.578206 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jm22m" podStartSLOduration=128.578182199 podStartE2EDuration="2m8.578182199s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.569731723 +0000 UTC m=+164.411154759" 
watchObservedRunningTime="2025-11-22 07:13:22.578182199 +0000 UTC m=+164.419605205" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.583540 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.587124 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.087094389 +0000 UTC m=+164.928517555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.596306 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" podStartSLOduration=128.59628589 podStartE2EDuration="2m8.59628589s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.595824395 +0000 UTC m=+164.437247411" watchObservedRunningTime="2025-11-22 07:13:22.59628589 +0000 UTC m=+164.437708896" Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.690493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.690962 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.190942363 +0000 UTC m=+165.032365369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.791822 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.792253 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.292230976 +0000 UTC m=+165.133653982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.893576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.894124 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.394112138 +0000 UTC m=+165.235535144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.994653 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.994867 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.494833653 +0000 UTC m=+165.336256659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4858]: I1122 07:13:22.995256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:22 crc kubenswrapper[4858]: E1122 07:13:22.995742 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.49571332 +0000 UTC m=+165.337136326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.097115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.097257 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.59723689 +0000 UTC m=+165.438659896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.097440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.097728 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.597719765 +0000 UTC m=+165.439142771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.198519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.198719 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.698693378 +0000 UTC m=+165.540116384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.198805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.199116 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.699108341 +0000 UTC m=+165.540531347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.299360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.299507 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.799485525 +0000 UTC m=+165.640908531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.300014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.300438 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.800418065 +0000 UTC m=+165.641841071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.400608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.400788 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.900764388 +0000 UTC m=+165.742187394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.400901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.401184 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.901171231 +0000 UTC m=+165.742594237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.502390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.502722 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.00269592 +0000 UTC m=+165.844118926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.503086 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.503424 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.003409733 +0000 UTC m=+165.844832739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.520612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" event={"ID":"8d6bdadf-9903-44ef-b008-4f3864e83bb4","Type":"ContainerStarted","Data":"e1e339d8a46173a4292fa99d13c9dfbf5529e87e660fdcedaa9a7798cd63a24a"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.522223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" event={"ID":"85fba749-4c9a-41e6-b7f2-eb1f82777f1a","Type":"ContainerStarted","Data":"75673361a55d7294be3b359bacb10bb1d79a2ff14baee6acaf9586eee284add1"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.524500 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xsjfg" event={"ID":"5555fba1-f639-4d9b-992b-203cc9a88938","Type":"ContainerStarted","Data":"b66b7abe64c82e632ec417d14c17540573ad8dfc846715e6a0e0e1dc05547324"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.527957 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" event={"ID":"a2875d04-e612-4175-befc-3a62d7ea9daf","Type":"ContainerStarted","Data":"763d7efa66512f8956ba21ded257cfd7d3ef85c8ab9b72c3b3f3640bf002fed8"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.529757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9a178bffd189eefdd0c7c49bfe3ff3dc84af170bf4e73f061028cf379d88d005"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.531902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" event={"ID":"4322806d-7a81-49aa-9e44-638c6cab8e57","Type":"ContainerStarted","Data":"05d5194a9b21aa272e24db94bd208a9f7356dde9fad042a25d7eed876fce32ee"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.533626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" event={"ID":"e2051416-ad0d-472b-bcf1-b3b236ab3c6e","Type":"ContainerStarted","Data":"d8ef6c43b37dcb8cab7f1090c1e2ddc170069c4aef1360d42bbc3abf0bd48dde"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.552023 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwq72" podStartSLOduration=128.552005735 podStartE2EDuration="2m8.552005735s" podCreationTimestamp="2025-11-22 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.547655638 +0000 UTC m=+165.389078644" watchObservedRunningTime="2025-11-22 07:13:23.552005735 +0000 UTC m=+165.393428741" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.558052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" event={"ID":"7cb9f615-dc32-4f01-884b-db24dfb05c34","Type":"ContainerStarted","Data":"8414a7ddb1a889f4c7b1768708a340efe0869c69e66f4e76dbef7f463c63e033"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.563861 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" event={"ID":"f9a5d871-647f-4486-8bb9-14e65650c259","Type":"ContainerStarted","Data":"264c56d34fdc82e74fa3fdfe32abf3143bbff717b7c023bd0526d55f64e7537c"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.577377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gx8tl" event={"ID":"d33abf93-ea42-48e0-82e1-6afc9c4542ca","Type":"ContainerStarted","Data":"37aa6d47f4c2b589179de43f73d05cf25558be219243ffd91adf90eb9ba3b184"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.578587 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-28jfb" podStartSLOduration=129.578571952 podStartE2EDuration="2m9.578571952s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.575901088 +0000 UTC m=+165.417324094" watchObservedRunningTime="2025-11-22 07:13:23.578571952 +0000 UTC m=+165.419994958" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.592290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s7s7n" event={"ID":"29e8f1af-f22c-44df-9493-8886f7d4045b","Type":"ContainerStarted","Data":"29983fb2e7fdb740ecdb0f9856813672148a29e1484c205436d39e6a161ea202"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.602997 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gx8tl" podStartSLOduration=6.602974092 podStartE2EDuration="6.602974092s" podCreationTimestamp="2025-11-22 07:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.602457726 +0000 UTC m=+165.443880752" watchObservedRunningTime="2025-11-22 07:13:23.602974092 +0000 UTC m=+165.444397108" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.603381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" event={"ID":"8c84d713-f4f9-4968-a086-95187d89c9c1","Type":"ContainerStarted","Data":"466bc048e7a9a6e5870ceba1b3552beff5ee1212a6485ab43749a6ea4cd66702"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.603895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.605068 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:24.105054877 +0000 UTC m=+165.946477883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.615207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t589f" event={"ID":"dc728b9f-ad18-4c6a-927a-b6670e080ad9","Type":"ContainerStarted","Data":"cc17526e3c6ce0d222988bbffdbbc2882828d8c493796ebca3a48682a2461b00"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.622739 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s7s7n" podStartSLOduration=7.622719334 podStartE2EDuration="7.622719334s" podCreationTimestamp="2025-11-22 07:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.622029353 +0000 UTC m=+165.463452379" watchObservedRunningTime="2025-11-22 07:13:23.622719334 +0000 UTC m=+165.464142350" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.642743 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" event={"ID":"779f6711-3ae2-4fdf-b7a2-8755c5373eb3","Type":"ContainerStarted","Data":"8aead54edab6636a6c02213235b4ac97447538ae82c401e0f26ae8dbaa315b13"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.648819 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" event={"ID":"556a50cc-cb65-4fb5-bee5-a88cbfd40341","Type":"ContainerStarted","Data":"4240d349b4862cfa9e2fb16ae4ca5e37390ff99a962259b782d98deeca0c4437"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.650774 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t589f" podStartSLOduration=128.650753127 podStartE2EDuration="2m8.650753127s" podCreationTimestamp="2025-11-22 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.64893308 +0000 UTC m=+165.490356116" watchObservedRunningTime="2025-11-22 07:13:23.650753127 +0000 UTC m=+165.492176133" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.681414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" event={"ID":"d6c7906f-ca7f-4b22-ab70-b38aad08121f","Type":"ContainerStarted","Data":"9b6498bb3bdf93869b08b7545589c66b49510506ce6bf74ff703d061bb2bfde6"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.706998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f8221eed7df95fd284a5c3a4ee93c1a436bd04c1b1fb03e9f4c4b24f99c83bef"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.707378 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.708007 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-88dfg" podStartSLOduration=130.707993802 podStartE2EDuration="2m10.707993802s" podCreationTimestamp="2025-11-22 07:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.706878376 +0000 UTC m=+165.548301382" watchObservedRunningTime="2025-11-22 07:13:23.707993802 +0000 UTC m=+165.549416798" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.709059 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.209043815 +0000 UTC m=+166.050466831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.717011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" event={"ID":"5ce09591-28f0-4ab0-956d-694afdddaa86","Type":"ContainerStarted","Data":"e8f8b54ad2fbd1ce477d794e588ddbdd1d1236f382d355da6726eef95bbe78b9"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.735656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" event={"ID":"791e360a-39d1-48f6-9e9e-2e768a1710ad","Type":"ContainerStarted","Data":"b0f4bc828100725b265cfc9c2516166ede018de659a57ab08f3ff07dc32b3489"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.747496 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-slbpn" podStartSLOduration=129.747477097 podStartE2EDuration="2m9.747477097s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.746044371 +0000 UTC m=+165.587467377" watchObservedRunningTime="2025-11-22 07:13:23.747477097 +0000 UTC m=+165.588900103" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.748670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xllks" event={"ID":"dba0e5a3-9474-4dde-a3c3-52390b657290","Type":"ContainerStarted","Data":"583d27a331b6828001a5970d11e9442179cb3e9d6036f13f10c55b9d9480da65"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.762123 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" event={"ID":"febf775b-8c73-4ff1-99a0-ef53e4f20cd1","Type":"ContainerStarted","Data":"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.774444 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xllks" podStartSLOduration=129.774430336 podStartE2EDuration="2m9.774430336s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.772386832 +0000 UTC m=+165.613809858" watchObservedRunningTime="2025-11-22 07:13:23.774430336 +0000 UTC m=+165.615853342" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.798089 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" event={"ID":"005c94b6-beb8-49e1-93e2-119bc01cd795","Type":"ContainerStarted","Data":"c65876e1e66909b8472774997f99b341284bfd31304a77e17a39017760866ff8"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.810532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" event={"ID":"cec4f090-10a6-4206-b4a1-3876d40a0d4b","Type":"ContainerStarted","Data":"c2b08dcec23f379eab4787b9431f10768c24026b2af11d608af7ae3563f5a1e9"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.811302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.811584 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.311540726 +0000 UTC m=+166.152963732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.812079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.812702 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:24.312694433 +0000 UTC m=+166.154117439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.821015 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" event={"ID":"d21a3b57-d4c4-474a-8be9-75c59a385d92","Type":"ContainerStarted","Data":"5f6bddd4a7299749f0e6db8b24f1b85be80bdbfa3093602cfaa87aec5fb3e8dc"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.821645 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.823690 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76fn7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.823730 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" podUID="d21a3b57-d4c4-474a-8be9-75c59a385d92" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.832203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9a310c28cccc7b6eab3e358b6f5838b69ea508f6a86ba2b88d8843b294b535dc"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.833687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" event={"ID":"6959fc44-3e32-4848-85cf-963d3f7e8c16","Type":"ContainerStarted","Data":"3296c726e3e4ad46b963e85253c7621813e81caec3812632a3cefa0239ea46db"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.834685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" event={"ID":"4458c793-c7d2-400a-876a-2724099c5c3a","Type":"ContainerStarted","Data":"75680dbf31658eee7255f45ee5e91b0a2e274cab753fba9312e5b74eb456ef05"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.836181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" event={"ID":"324dfa6f-8489-4776-b7bf-9bf6a489478b","Type":"ContainerStarted","Data":"1467dc3e0d73188e6a54a56d83ebc18a3925fc8e5fbff3fe0a36d4ed7ff97dbe"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.838025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" 
event={"ID":"55c7556d-4740-4be7-bc47-f81c4c7374c6","Type":"ContainerStarted","Data":"594df0fedfb74783da6b86380577840e7ac857859882c7cf36792bdfa0113f17"} Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.861233 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" podStartSLOduration=129.861213191 podStartE2EDuration="2m9.861213191s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.843809653 +0000 UTC m=+165.685232679" watchObservedRunningTime="2025-11-22 07:13:23.861213191 +0000 UTC m=+165.702636197" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.886900 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kgzjd" podStartSLOduration=129.886881571 podStartE2EDuration="2m9.886881571s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.862765351 +0000 UTC m=+165.704188357" watchObservedRunningTime="2025-11-22 07:13:23.886881571 +0000 UTC m=+165.728304577" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.912915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4858]: E1122 07:13:23.914939 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.414919424 +0000 UTC m=+166.256342430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.922244 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jj8h6" podStartSLOduration=129.922229565 podStartE2EDuration="2m9.922229565s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.893732087 +0000 UTC m=+165.735155193" watchObservedRunningTime="2025-11-22 07:13:23.922229565 +0000 UTC m=+165.763652571" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.924643 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hs8qj" podStartSLOduration=129.924634061 podStartE2EDuration="2m9.924634061s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.914742589 +0000 UTC m=+165.756165595" watchObservedRunningTime="2025-11-22 07:13:23.924634061 +0000 UTC m=+165.766057087" Nov 22 07:13:23 crc kubenswrapper[4858]: I1122 07:13:23.951770 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dzz75" podStartSLOduration=129.951750876 podStartE2EDuration="2m9.951750876s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.936619288 +0000 UTC m=+165.778042304" watchObservedRunningTime="2025-11-22 07:13:23.951750876 +0000 UTC m=+165.793173882" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.015008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.015357 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.515304649 +0000 UTC m=+166.356727655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.116236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.116531 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.616481048 +0000 UTC m=+166.457904054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.116697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.117065 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.617044916 +0000 UTC m=+166.458467922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.218108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.218267 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.718237946 +0000 UTC m=+166.559660962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.218393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.218734 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.718721091 +0000 UTC m=+166.560144097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.319252 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.319411 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.819393254 +0000 UTC m=+166.660816260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.319594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.319893 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.819882229 +0000 UTC m=+166.661305235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.421136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.421311 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.921289446 +0000 UTC m=+166.762712462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.421419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.421793 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.921783041 +0000 UTC m=+166.763206047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.522967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.523135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.023110636 +0000 UTC m=+166.864533642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.523241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.523578 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.0235617 +0000 UTC m=+166.864984706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.624942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.625238 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.125123511 +0000 UTC m=+166.966546517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.625440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.625817 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.125801802 +0000 UTC m=+166.967224808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.632523 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.633178 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.633231 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.726893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.727115 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.227091825 +0000 UTC m=+167.068514831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.727729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.728088 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.228074686 +0000 UTC m=+167.069497692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.829037 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.829197 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.329175413 +0000 UTC m=+167.170598429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.829395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.829918 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.329906446 +0000 UTC m=+167.171329452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.843652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" event={"ID":"82c86211-6b1e-41e0-80b6-898aec0123a3","Type":"ContainerStarted","Data":"ed1b00d2971bab2cad1cbf9e688b8a81bbfdb2a16094b97ebce76ec67011a9d5"} Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.847235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" event={"ID":"593b796b-f4d8-4c80-b84f-38f74cfbd37b","Type":"ContainerStarted","Data":"12c04a5ce043ec03bf793a079e1a4ea58c6ae3a38c556fdf28a2b5af4692aa43"} Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.847938 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.849120 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76fn7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.849629 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qbgwx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.849669 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.849984 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" podUID="d21a3b57-d4c4-474a-8be9-75c59a385d92" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.873340 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vb9gv" podStartSLOduration=130.873310984 podStartE2EDuration="2m10.873310984s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.955002768 +0000 UTC m=+165.796425774" watchObservedRunningTime="2025-11-22 07:13:24.873310984 +0000 UTC m=+166.714733990" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.883470 4858 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" podStartSLOduration=130.883445423 podStartE2EDuration="2m10.883445423s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.882226636 +0000 UTC m=+166.723649642" watchObservedRunningTime="2025-11-22 07:13:24.883445423 +0000 UTC m=+166.724868429" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.898668 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" podStartSLOduration=130.898644563 podStartE2EDuration="2m10.898644563s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.895430062 +0000 UTC m=+166.736853088" watchObservedRunningTime="2025-11-22 07:13:24.898644563 +0000 UTC m=+166.740067569" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.912018 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" podStartSLOduration=130.912001874 podStartE2EDuration="2m10.912001874s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.909462544 +0000 UTC m=+166.750885550" watchObservedRunningTime="2025-11-22 07:13:24.912001874 +0000 UTC m=+166.753424880" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.924611 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" podStartSLOduration=130.9245983 podStartE2EDuration="2m10.9245983s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.923575299 +0000 UTC m=+166.764998305" watchObservedRunningTime="2025-11-22 07:13:24.9245983 +0000 UTC m=+166.766021306" Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.930500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.930743 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.430691443 +0000 UTC m=+167.272114449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.931573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:24 crc kubenswrapper[4858]: E1122 07:13:24.934065 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.434040388 +0000 UTC m=+167.275463454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4858]: I1122 07:13:24.963323 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jl4s" podStartSLOduration=130.963301661 podStartE2EDuration="2m10.963301661s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.958637823 +0000 UTC m=+166.800060839" watchObservedRunningTime="2025-11-22 07:13:24.963301661 +0000 UTC m=+166.804724667" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.033411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.033683 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.533651268 +0000 UTC m=+167.375074294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.033767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.034109 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.534061351 +0000 UTC m=+167.375484357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.135531 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.135791 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.635760577 +0000 UTC m=+167.477183583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.135854 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.136179 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.63616571 +0000 UTC m=+167.477588716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.236541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.236956 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.736939066 +0000 UTC m=+167.578362072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.340235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.340717 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.840694397 +0000 UTC m=+167.682117403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.441508 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.941484934 +0000 UTC m=+167.782907960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.441840 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.442216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.442521 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.942508755 +0000 UTC m=+167.783931761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.544225 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.544450 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.044425018 +0000 UTC m=+167.885848024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.544662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.545029 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.045020067 +0000 UTC m=+167.886443073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.631846 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.631903 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.645994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.646521 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.146500566 +0000 UTC m=+167.987923572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.747660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.748072 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.248053487 +0000 UTC m=+168.089476493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.848306 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.848572 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.348546744 +0000 UTC m=+168.189969750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.848962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.849286 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.349278178 +0000 UTC m=+168.190701184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.856516 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" event={"ID":"779f6711-3ae2-4fdf-b7a2-8755c5373eb3","Type":"ContainerStarted","Data":"54240fd2ba54881e836120abd808ef84a4a2948f021ff7204e322ee10fc019fe"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.858487 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" event={"ID":"f9a5d871-647f-4486-8bb9-14e65650c259","Type":"ContainerStarted","Data":"aa3e6ecd5c1587aaa544054d847e4513c6c47ce460c5235afe79a72e2295ded1"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.860885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xsjfg" event={"ID":"5555fba1-f639-4d9b-992b-203cc9a88938","Type":"ContainerStarted","Data":"2ac157d45c83445cd668560360bf22369a3facdc1c1ba13e66f3fefb964f50fa"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.862604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" event={"ID":"324dfa6f-8489-4776-b7bf-9bf6a489478b","Type":"ContainerStarted","Data":"066a687ec31b3869b77f06be6e85ed642c242544c9d2d2b5e06a8f9b895bd574"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.876141 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" event={"ID":"8c84d713-f4f9-4968-a086-95187d89c9c1","Type":"ContainerStarted","Data":"b0dbb8f7df4b043c4c2d5558604d666ffabe45e7bd362eb501ce30a7cc4757ea"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.878789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" event={"ID":"cec4f090-10a6-4206-b4a1-3876d40a0d4b","Type":"ContainerStarted","Data":"7232e412882a4607276d3e02da04bb85e5dc0ca23a8f6d3817e6bdd18c4c8479"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.881777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" event={"ID":"005c94b6-beb8-49e1-93e2-119bc01cd795","Type":"ContainerStarted","Data":"c3e5b997c8e6eab14d6762e1ce83d38be0b3bd5da68d0507a3ec4581380f4a0f"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.883700 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8ptsp" podStartSLOduration=131.883680732 podStartE2EDuration="2m11.883680732s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:24.984830019 +0000 UTC m=+166.826253025" watchObservedRunningTime="2025-11-22 07:13:25.883680732 +0000 UTC m=+167.725103738" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.892276 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ndcl9" podStartSLOduration=131.892249162 podStartE2EDuration="2m11.892249162s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.883222468 +0000 UTC m=+167.724645474" watchObservedRunningTime="2025-11-22 07:13:25.892249162 +0000 UTC m=+167.733672178" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.904320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" event={"ID":"556a50cc-cb65-4fb5-bee5-a88cbfd40341","Type":"ContainerStarted","Data":"7dffb81a2ef8d1cd667ec8bf28b0b245d7ae62dc742ed21e4ddade559beac687"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.910323 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r79gp" podStartSLOduration=131.910301911 podStartE2EDuration="2m11.910301911s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.910208148 +0000 UTC m=+167.751631164" watchObservedRunningTime="2025-11-22 07:13:25.910301911 +0000 UTC m=+167.751724917" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.911425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" event={"ID":"8d6bdadf-9903-44ef-b008-4f3864e83bb4","Type":"ContainerStarted","Data":"acf658891da0a726419dda4edf5280ee81aeab2a70242db35aaa6b65cfc75a49"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.915626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" event={"ID":"c93abfad-9450-43d5-824c-7ff52c2a613b","Type":"ContainerStarted","Data":"9700c8c72418806ddd1cc5e958c573c765eb5c1cfa0deace3c3be941b5537994"} Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.915655 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.916453 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qbgwx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.916518 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.916771 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.944032 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" podStartSLOduration=131.944018034 podStartE2EDuration="2m11.944018034s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.943649902 +0000 UTC m=+167.785072928" watchObservedRunningTime="2025-11-22 07:13:25.944018034 +0000 UTC m=+167.785441040" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.949977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4858]: E1122 07:13:25.951120 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.451102978 +0000 UTC m=+168.292525984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.993583 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" podStartSLOduration=131.993563436 podStartE2EDuration="2m11.993563436s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.992940586 +0000 UTC m=+167.834363592" watchObservedRunningTime="2025-11-22 07:13:25.993563436 +0000 UTC m=+167.834986442" Nov 22 07:13:25 crc kubenswrapper[4858]: I1122 07:13:25.994409 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnfrt" podStartSLOduration=131.994402512 podStartE2EDuration="2m11.994402512s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.968118874 +0000 UTC m=+167.809541880" watchObservedRunningTime="2025-11-22 07:13:25.994402512 +0000 UTC m=+167.835825518" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.046239 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-znzs2" podStartSLOduration=132.046219886 podStartE2EDuration="2m12.046219886s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.031475821 +0000 UTC m=+167.872898827" watchObservedRunningTime="2025-11-22 07:13:26.046219886 +0000 UTC m=+167.887642892" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.047449 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k74tq" podStartSLOduration=132.047442214 podStartE2EDuration="2m12.047442214s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.045200423 +0000 UTC m=+167.886623429" watchObservedRunningTime="2025-11-22 07:13:26.047442214 +0000 UTC m=+167.888865220" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.051343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.052989 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:26.552973968 +0000 UTC m=+168.394396974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.069281 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" podStartSLOduration=131.069266682 podStartE2EDuration="2m11.069266682s" podCreationTimestamp="2025-11-22 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.067917769 +0000 UTC m=+167.909340775" watchObservedRunningTime="2025-11-22 07:13:26.069266682 +0000 UTC m=+167.910689688" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.152877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.153076 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.653046763 +0000 UTC m=+168.494469779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.153562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.153933 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.65392258 +0000 UTC m=+168.495345586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.254638 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.255012 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.754986766 +0000 UTC m=+168.596409772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.255128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.255478 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.755466271 +0000 UTC m=+168.596889277 (durationBeforeRetry 500ms). 
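Annotation: each failed mount or unmount above is parked by nestedpendingoperations, which is why every error carries "No retries permitted until <timestamp> (durationBeforeRetry 500ms)" and the same operation reappears roughly twice a second. A minimal sketch of that gating pattern under the assumption of a single fixed per-key delay; all names are invented for illustration and this is not the kubelet's nestedpendingoperations package.

package main

import (
	"fmt"
	"time"
)

// retryGate models the "no retries permitted until <t>" gating visible in the
// journal, with one fixed delay per operation key.
type retryGate struct {
	delay   time.Duration
	nextTry map[string]time.Time
}

func newRetryGate(delay time.Duration) *retryGate {
	return &retryGate{delay: delay, nextTry: make(map[string]time.Time)}
}

// Attempt refuses to run op while the key is inside its back-off window; a
// failed attempt arms the window again.
func (g *retryGate) Attempt(key string, op func() error) error {
	if until, ok := g.nextTry[key]; ok && time.Now().Before(until) {
		return fmt.Errorf("operation for %q failed: no retries permitted until %s (durationBeforeRetry %s)",
			key, until.Format(time.RFC3339Nano), g.delay)
	}
	if err := op(); err != nil {
		g.nextTry[key] = time.Now().Add(g.delay)
		return err
	}
	delete(g.nextTry, key)
	return nil
}

func main() {
	gate := newRetryGate(500 * time.Millisecond)
	failing := func() error { return fmt.Errorf("driver not registered yet") }

	fmt.Println(gate.Attempt("pvc-657094db", failing)) // first failure arms the 500ms window
	fmt.Println(gate.Attempt("pvc-657094db", failing)) // rejected: still inside the window
	time.Sleep(600 * time.Millisecond)
	fmt.Println(gate.Attempt("pvc-657094db", failing)) // window expired, op runs (and fails) again
}
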
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.356273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.356478 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.856445804 +0000 UTC m=+168.697868820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.356539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.356813 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.856801886 +0000 UTC m=+168.698224892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.457998 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.458402 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.958376058 +0000 UTC m=+168.799799064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.560015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.560273 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.060261758 +0000 UTC m=+168.901684765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.649837 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:26 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:26 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:26 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.649932 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.661259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.661870 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.161842281 +0000 UTC m=+169.003265287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.762580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.762960 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.262945288 +0000 UTC m=+169.104368294 (durationBeforeRetry 500ms). 
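Annotation: the probe entries in this stretch (the marketplace-operator readiness probe refused on 10.217.0.39:8080, the router-default startup probe returning HTTP 500, and the openshift-config-operator probes a little further down) are plain HTTP GETs against a health endpoint. A self-contained stand-in follows, assuming a bare net/http client rather than the kubelet prober's actual transport and header handling; the endpoint is copied from the marketplace-operator entry.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHTTP mimics the shape of the failures logged by prober.go: a refused
// connection or a non-2xx status is reported as a probe failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("Get %q: %v", url, err) // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://10.217.0.39:8080/healthz", 1*time.Second); err != nil {
		fmt.Println("Readiness probe failed:", err)
	} else {
		fmt.Println("Readiness probe succeeded")
	}
}
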
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.864082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.864232 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.36420831 +0000 UTC m=+169.205631316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.864425 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.864755 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.364739336 +0000 UTC m=+169.206162342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.956833 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xsjfg" podStartSLOduration=10.956810179 podStartE2EDuration="10.956810179s" podCreationTimestamp="2025-11-22 07:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.955976182 +0000 UTC m=+168.797399188" watchObservedRunningTime="2025-11-22 07:13:26.956810179 +0000 UTC m=+168.798233185" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.959238 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fd8qn" podStartSLOduration=132.959221855 podStartE2EDuration="2m12.959221855s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.940193044 +0000 UTC m=+168.781616070" watchObservedRunningTime="2025-11-22 07:13:26.959221855 +0000 UTC m=+168.800644861" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.965560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.965717 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.465692608 +0000 UTC m=+169.307115614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.965812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:26 crc kubenswrapper[4858]: E1122 07:13:26.966135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.466122282 +0000 UTC m=+169.307545288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.973076 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" podStartSLOduration=132.973059031 podStartE2EDuration="2m12.973059031s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.969681754 +0000 UTC m=+168.811104770" watchObservedRunningTime="2025-11-22 07:13:26.973059031 +0000 UTC m=+168.814482037" Nov 22 07:13:26 crc kubenswrapper[4858]: I1122 07:13:26.992999 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6bl5t" podStartSLOduration=132.992985889 podStartE2EDuration="2m12.992985889s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:26.992016068 +0000 UTC m=+168.833439074" watchObservedRunningTime="2025-11-22 07:13:26.992985889 +0000 UTC m=+168.834408895" Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.066777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.066985 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
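Annotation: the pod_startup_latency_tracker entries report how long each pod took from its creation timestamp to being observed running; in every entry here firstStartedPulling/lastFinishedPulling are left at the zero time and podStartSLOduration equals podStartE2EDuration. A rough re-derivation of one entry above (control-plane-machine-set-operator), using a hypothetical helper rather than the kubelet's tracker.

package main

import (
	"fmt"
	"time"
)

// mustParse is a small helper for this sketch; the layout matches the
// timestamps printed by the tracker entries above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the control-plane-machine-set-operator entry earlier
	// in this journal window.
	created := mustParse("2025-11-22 07:11:14 +0000 UTC")
	observedRunning := mustParse("2025-11-22 07:13:25.883680732 +0000 UTC")

	slo := observedRunning.Sub(created)
	fmt.Printf("podStartSLOduration ≈ %.9fs (the entry reports 131.883680732, i.e. about 2m11.88s)\n", slo.Seconds())
}
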
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.56695453 +0000 UTC m=+169.408377546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.067558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.069013 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.568997354 +0000 UTC m=+169.410420410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.169614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.170075 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.670044309 +0000 UTC m=+169.511467315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.271389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.271697 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.771684713 +0000 UTC m=+169.613107719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.372667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.373185 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.873138032 +0000 UTC m=+169.714561038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.474596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.475053 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.975040033 +0000 UTC m=+169.816463039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.576845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.577062 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.077034169 +0000 UTC m=+169.918457175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.577168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.577511 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.077504164 +0000 UTC m=+169.918927170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.638349 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:27 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:27 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:27 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.638710 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.678147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.678542 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.178516478 +0000 UTC m=+170.019939484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.779245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.779592 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.279576193 +0000 UTC m=+170.120999189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.880155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.880484 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.380469034 +0000 UTC m=+170.221892040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.923490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:13:27 crc kubenswrapper[4858]: I1122 07:13:27.981977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:27 crc kubenswrapper[4858]: E1122 07:13:27.982369 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.482355084 +0000 UTC m=+170.323778100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.066077 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g8grc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.066434 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" podUID="593b796b-f4d8-4c80-b84f-38f74cfbd37b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.066094 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g8grc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.066664 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" podUID="593b796b-f4d8-4c80-b84f-38f74cfbd37b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.083167 
4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.083374 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.583308207 +0000 UTC m=+170.424731213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.083703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.083934 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.583926226 +0000 UTC m=+170.425349232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.185019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.185456 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.685431286 +0000 UTC m=+170.526854292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.286696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.287092 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.787077309 +0000 UTC m=+170.628500315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.387897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.388042 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.888024052 +0000 UTC m=+170.729447058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.388166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.388455 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.888446985 +0000 UTC m=+170.729869991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.456393 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.457287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.460765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.460929 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.469451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.489409 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.489771 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.989749298 +0000 UTC m=+170.831172304 (durationBeforeRetry 500ms). 
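Annotation: alongside the stuck PVC, a fresh pod (openshift-kube-controller-manager/revision-pruner-9-crc) is admitted in this window: SyncLoop ADD, "No sandbox for pod can be found", secret/configmap caches populated, then its kubelet-dir host-path and kube-api-access projected volumes are mounted, with the "MountVolume.SetUp succeeded" lines appearing just below. A condensed, hypothetical illustration of that per-volume sequence; stand-in types, not kubelet code.

package main

import "fmt"

// volume is a stand-in type; kind and setUp exist only for this illustration.
type volume struct {
	name  string
	kind  string // "host-path", "projected", ...
	setUp func() error
}

// mountPodVolumes prints the same started / SetUp-succeeded rhythm the
// reconciler logs for each declared volume of a new pod.
func mountPodVolumes(pod string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("operationExecutor.MountVolume started for volume %q (%s) pod %q\n", v.name, v.kind, pod)
		if err := v.setUp(); err != nil {
			fmt.Printf("MountVolume failed for volume %q: %v\n", v.name, err)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, pod)
	}
}

func main() {
	// Volume names copied from the revision-pruner-9-crc entries nearby.
	mountPodVolumes("openshift-kube-controller-manager/revision-pruner-9-crc", []volume{
		{name: "kubelet-dir", kind: "host-path", setUp: func() error { return nil }},
		{name: "kube-api-access", kind: "projected", setUp: func() error { return nil }},
	})
}
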
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.590833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.590876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.590905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.591259 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.091241278 +0000 UTC m=+170.932664284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.633272 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:28 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:28 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:28 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.633371 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.692284 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.692440 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.192416607 +0000 UTC m=+171.033839613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.692521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.692550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.692571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.692607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.692856 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.19284633 +0000 UTC m=+171.034269336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.713623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.780838 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.793170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.793358 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.293336828 +0000 UTC m=+171.134759854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.793462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.793766 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.293757011 +0000 UTC m=+171.135180017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.895925 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.896379 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.396359485 +0000 UTC m=+171.237782491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.918088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.918125 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.920190 4858 patch_prober.go:28] interesting pod/console-f9d7485db-gtcln container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.920240 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gtcln" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.938385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" event={"ID":"53f7564e-935e-41b1-bf5a-58d1d509a014","Type":"ContainerStarted","Data":"8626fb422957c68c51eb87ce2512ba24ea02613cab6f04b570e6011fea5cbe63"} Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.939815 4858 generic.go:334] "Generic (PLEG): container finished" podID="7cb9f615-dc32-4f01-884b-db24dfb05c34" containerID="8414a7ddb1a889f4c7b1768708a340efe0869c69e66f4e76dbef7f463c63e033" exitCode=0 Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.939884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" event={"ID":"7cb9f615-dc32-4f01-884b-db24dfb05c34","Type":"ContainerDied","Data":"8414a7ddb1a889f4c7b1768708a340efe0869c69e66f4e76dbef7f463c63e033"} Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.945483 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.954953 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2scn9" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.955886 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.955929 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 
07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.957220 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.957261 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:28 crc kubenswrapper[4858]: I1122 07:13:28.997229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:28 crc kubenswrapper[4858]: E1122 07:13:28.997586 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.497573776 +0000 UTC m=+171.338996782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.000760 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.002063 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.011280 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rsm26 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.011376 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" podUID="005c94b6-beb8-49e1-93e2-119bc01cd795" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.025024 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.026703 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.047749 4858 patch_prober.go:28] interesting 
pod/apiserver-7bbb656c7d-njg8w container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.047802 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" podUID="82c86211-6b1e-41e0-80b6-898aec0123a3" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.100635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.101581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.102742 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.60272681 +0000 UTC m=+171.444149816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.112371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.119347 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.197714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.203154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.206832 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.706814121 +0000 UTC m=+171.548237127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.267696 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.281080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.305944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.306062 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.806042328 +0000 UTC m=+171.647465334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.306391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.306649 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.806641707 +0000 UTC m=+171.648064713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.407138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.408434 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.908411686 +0000 UTC m=+171.749834692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.508836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.509220 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.009205243 +0000 UTC m=+171.850628249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.611017 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.611510 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.111478666 +0000 UTC m=+171.952901682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.630718 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.638676 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:29 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:29 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:29 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.638758 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.712896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.713383 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.213363088 +0000 UTC m=+172.054786094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.814175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.814405 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.314376572 +0000 UTC m=+172.155799578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.814547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.814897 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.314886008 +0000 UTC m=+172.156309014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.915492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4858]: E1122 07:13:29.915842 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.41582403 +0000 UTC m=+172.257247036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.945286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"74c8ece5-4056-4a29-b133-9cb189b79c01","Type":"ContainerStarted","Data":"2385ca7ac2e0aab2d8ebfa70e56864e2a267b92698d5f61bc55aff91e5e346a4"} Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.963574 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.964580 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.967906 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.974111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:29 crc kubenswrapper[4858]: I1122 07:13:29.993745 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.016901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.017028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.017074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.017106 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrcj\" (UniqueName: \"kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.019727 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76fn7" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.019995 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.519977122 +0000 UTC m=+172.361400198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.035977 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klt75" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.065779 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.076537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nzt8k" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.094548 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.118438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.118718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.118765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.118785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrcj\" (UniqueName: \"kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.120349 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.620311946 +0000 UTC m=+172.461734952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.121014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.121420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.127565 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.128570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.130306 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.159495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrcj\" (UniqueName: \"kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj\") pod \"certified-operators-gg4fx\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.168464 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.219771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.219821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.219867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.219915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b778r\" (UniqueName: \"kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.220750 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.720734671 +0000 UTC m=+172.562157677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.289953 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.336482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.336709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b778r\" (UniqueName: \"kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.336817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.336848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.337143 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.337432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content\") pod \"community-operators-bpkvd\" (UID: 
\"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.337447 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.837426949 +0000 UTC m=+172.678849955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.337734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.337991 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.351552 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.391815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b778r\" (UniqueName: \"kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r\") pod \"community-operators-bpkvd\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.442653 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.452155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.452238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.452285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8h6v\" (UniqueName: \"kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.452350 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.452692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.952677152 +0000 UTC m=+172.794100168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.523132 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.526946 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.556544 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.557433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.558061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.558236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8h6v\" (UniqueName: \"kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.558334 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.058283631 +0000 UTC m=+172.899706647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.558448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.561957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.579340 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.612994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8h6v\" (UniqueName: \"kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v\") pod \"certified-operators-qwxv2\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.657829 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:30 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:30 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:30 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.657909 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.662599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.662654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities\") pod \"community-operators-gm4fm\" (UID: 
\"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.662906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.663049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsc28\" (UniqueName: \"kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.663305 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.163285151 +0000 UTC m=+173.004708157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.683250 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.701487 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.767909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.767990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume\") pod \"7cb9f615-dc32-4f01-884b-db24dfb05c34\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.768053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc5zk\" (UniqueName: \"kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk\") pod \"7cb9f615-dc32-4f01-884b-db24dfb05c34\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.768123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume\") pod \"7cb9f615-dc32-4f01-884b-db24dfb05c34\" (UID: \"7cb9f615-dc32-4f01-884b-db24dfb05c34\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.768286 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsc28\" (UniqueName: \"kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.768347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.768394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.770999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.771338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 
07:13:30.771432 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.271410179 +0000 UTC m=+173.112833175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.772026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cb9f615-dc32-4f01-884b-db24dfb05c34" (UID: "7cb9f615-dc32-4f01-884b-db24dfb05c34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.783009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk" (OuterVolumeSpecName: "kube-api-access-wc5zk") pod "7cb9f615-dc32-4f01-884b-db24dfb05c34" (UID: "7cb9f615-dc32-4f01-884b-db24dfb05c34"). InnerVolumeSpecName "kube-api-access-wc5zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.786259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cb9f615-dc32-4f01-884b-db24dfb05c34" (UID: "7cb9f615-dc32-4f01-884b-db24dfb05c34"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.796680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsc28\" (UniqueName: \"kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28\") pod \"community-operators-gm4fm\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.857303 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.876140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.876193 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb9f615-dc32-4f01-884b-db24dfb05c34-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.876205 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc5zk\" (UniqueName: \"kubernetes.io/projected/7cb9f615-dc32-4f01-884b-db24dfb05c34-kube-api-access-wc5zk\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.876216 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cb9f615-dc32-4f01-884b-db24dfb05c34-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.876481 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.3764525 +0000 UTC m=+173.217875496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4858]: W1122 07:13:30.883522 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b566c9a_c894_4c64_8c08_9b4ff2f9d064.slice/crio-7cd75c35540135e95e062e3be0dadf77e160d05239e47e550b96d30ddc88a979 WatchSource:0}: Error finding container 7cd75c35540135e95e062e3be0dadf77e160d05239e47e550b96d30ddc88a979: Status 404 returned error can't find the container with id 7cd75c35540135e95e062e3be0dadf77e160d05239e47e550b96d30ddc88a979 Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.896957 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.951921 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.956959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerStarted","Data":"7cd75c35540135e95e062e3be0dadf77e160d05239e47e550b96d30ddc88a979"} Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.958560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" event={"ID":"7cb9f615-dc32-4f01-884b-db24dfb05c34","Type":"ContainerDied","Data":"8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27"} Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.958585 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a76609ffb1d1883eca762b94b3b2d7f8a1ea0a3a9686d337f53b709224c2f27" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.958633 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h" Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.961643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"74c8ece5-4056-4a29-b133-9cb189b79c01","Type":"ContainerStarted","Data":"0c1af842c78161fb5f8f1c6173851e15950fbe572323c42912b8fe8f6f2fbaa4"} Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.978132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4858]: I1122 07:13:30.986807 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.9867849079999997 podStartE2EDuration="2.986784908s" podCreationTimestamp="2025-11-22 07:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:30.982942906 +0000 UTC m=+172.824365922" watchObservedRunningTime="2025-11-22 07:13:30.986784908 +0000 UTC m=+172.828207914" Nov 22 07:13:30 crc kubenswrapper[4858]: E1122 07:13:30.988003 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.478514557 +0000 UTC m=+173.319937563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.068400 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8grc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.081249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.083032 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.583013161 +0000 UTC m=+173.424436167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.183542 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.185160 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.6851408 +0000 UTC m=+173.526563816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.286128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.286496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.786482245 +0000 UTC m=+173.627905261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.372728 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.387547 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.389122 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.889050877 +0000 UTC m=+173.730473883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.493154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.493573 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.993557302 +0000 UTC m=+173.834980308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.504249 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.516692 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.517004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb9f615-dc32-4f01-884b-db24dfb05c34" containerName="collect-profiles" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.517024 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb9f615-dc32-4f01-884b-db24dfb05c34" containerName="collect-profiles" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.517149 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb9f615-dc32-4f01-884b-db24dfb05c34" containerName="collect-profiles" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.517573 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.519468 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.519732 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.527100 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.594494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.598981 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.098942573 +0000 UTC m=+173.940365579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.639537 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:31 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:31 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:31 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.639604 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.696914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.697026 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" 
Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.697054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.697421 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.197403627 +0000 UTC m=+174.038826643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.800424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.800690 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.300642531 +0000 UTC m=+174.142065537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.801224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.801308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.801369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.801469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.801822 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.301808788 +0000 UTC m=+174.143231794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.825025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.902867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.902952 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.402931896 +0000 UTC m=+174.244354902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.903089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:31 crc kubenswrapper[4858]: E1122 07:13:31.903413 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.403402071 +0000 UTC m=+174.244825087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.914673 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.915890 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.917856 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.932951 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.948651 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.968666 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerID="ecea4307f448d9abf261c8ac0346ae435a78568a4b11b159247c8a0d0af25576" exitCode=0 Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.969118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerDied","Data":"ecea4307f448d9abf261c8ac0346ae435a78568a4b11b159247c8a0d0af25576"} Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.969937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerStarted","Data":"1e6554b066d9b6bb33ab670875fdfae360a4590968a0efe7c6afab61d1f34ac5"} Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.971353 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.971938 4858 generic.go:334] "Generic (PLEG): container finished" podID="74c8ece5-4056-4a29-b133-9cb189b79c01" containerID="0c1af842c78161fb5f8f1c6173851e15950fbe572323c42912b8fe8f6f2fbaa4" exitCode=0 Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.972245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"74c8ece5-4056-4a29-b133-9cb189b79c01","Type":"ContainerDied","Data":"0c1af842c78161fb5f8f1c6173851e15950fbe572323c42912b8fe8f6f2fbaa4"} Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.986978 4858 generic.go:334] "Generic (PLEG): container finished" podID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerID="14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c" exitCode=0 Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.987869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" 
event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerDied","Data":"14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c"} Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.987900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerStarted","Data":"1e27113cf9aad6b6d4671eb3273032ab4d7111465101e6a20b190dda677d1bfe"} Nov 22 07:13:31 crc kubenswrapper[4858]: I1122 07:13:31.993293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerStarted","Data":"491a0b6165c7c2a598d54cb17cf49073173b5b1dc4343e894ab432b23e889203"} Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.004140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.004226 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.504208999 +0000 UTC m=+174.345632005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.004372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.004419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nswl\" (UniqueName: \"kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.004466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.004509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.005632 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.505622093 +0000 UTC m=+174.347045099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.105241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.105458 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.605429439 +0000 UTC m=+174.446852455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.105790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nswl\" (UniqueName: \"kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.105835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.105878 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.105931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.106672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.107271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.108178 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.608164355 +0000 UTC m=+174.449587431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.113099 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xsjfg" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.132410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nswl\" (UniqueName: \"kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl\") pod \"redhat-marketplace-gllgw\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.155856 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:32 crc kubenswrapper[4858]: W1122 07:13:32.164950 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3b6e168c_0cd0_4d30_a99c_21b18714df19.slice/crio-4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b WatchSource:0}: Error finding container 4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b: Status 404 returned error can't find the container with id 4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.206733 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.206926 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.706901227 +0000 UTC m=+174.548324233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.207112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.207523 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.707506957 +0000 UTC m=+174.548929963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.229357 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.303935 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.305349 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.308375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.311457 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.811429962 +0000 UTC m=+174.652853088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.320796 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.410977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.411642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8dg\" (UniqueName: \"kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.411742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.411795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.412293 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.912276451 +0000 UTC m=+174.753699447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.476974 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:13:32 crc kubenswrapper[4858]: W1122 07:13:32.494898 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2126ba6_5874_4d63_98e6_1425898e8271.slice/crio-ed61fdc636e30f272e48dc0b1d5318409d989cc5e48ead5252490cfa3a434cac WatchSource:0}: Error finding container ed61fdc636e30f272e48dc0b1d5318409d989cc5e48ead5252490cfa3a434cac: Status 404 returned error can't find the container with id ed61fdc636e30f272e48dc0b1d5318409d989cc5e48ead5252490cfa3a434cac Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.514188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.514633 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.014481812 +0000 UTC m=+174.855904818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.514706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.514787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8dg\" (UniqueName: \"kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.514868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.514899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.515353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.515391 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.015376481 +0000 UTC m=+174.856799487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.515434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.537312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8dg\" (UniqueName: \"kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg\") pod \"redhat-marketplace-fhllm\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.616267 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.616685 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.116640323 +0000 UTC m=+174.958063329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.633382 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:32 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:32 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:32 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.633464 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.648517 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.717495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.717882 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.217867454 +0000 UTC m=+175.059290460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.819138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.819271 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.319248759 +0000 UTC m=+175.160671765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.819544 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.819966 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.319951041 +0000 UTC m=+175.161374047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: W1122 07:13:32.875041 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca5c366_b319_4f1d_a936_e070dd85d876.slice/crio-cb650d9287d9a3aa4c7d6471a39789c0370be689fdfebca357bd07d5fdd0bec7 WatchSource:0}: Error finding container cb650d9287d9a3aa4c7d6471a39789c0370be689fdfebca357bd07d5fdd0bec7: Status 404 returned error can't find the container with id cb650d9287d9a3aa4c7d6471a39789c0370be689fdfebca357bd07d5fdd0bec7 Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.876763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.920458 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.920654 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.420626935 +0000 UTC m=+175.262049951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4858]: I1122 07:13:32.920891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:32 crc kubenswrapper[4858]: E1122 07:13:32.921171 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.421160042 +0000 UTC m=+175.262583048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:32.999975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerStarted","Data":"ed61fdc636e30f272e48dc0b1d5318409d989cc5e48ead5252490cfa3a434cac"} Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.001615 4858 generic.go:334] "Generic (PLEG): container finished" podID="406d65df-4a13-40bf-93c3-48a06797a79b" containerID="ae45a93d9af9c15bf11c0b5aca40ddc6f588cd7afde36598af092e1549ab52e3" exitCode=0 Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.001669 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerDied","Data":"ae45a93d9af9c15bf11c0b5aca40ddc6f588cd7afde36598af092e1549ab52e3"} Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.002775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerStarted","Data":"cb650d9287d9a3aa4c7d6471a39789c0370be689fdfebca357bd07d5fdd0bec7"} Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.003713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3b6e168c-0cd0-4d30-a99c-21b18714df19","Type":"ContainerStarted","Data":"4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b"} Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.004890 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerStarted","Data":"850fe739c6dde3b7f82e38d33b2865214ebcbb70e26513f03cf8f329daf442ab"} Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.022297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.022488 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.522468175 +0000 UTC m=+175.363891171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.023648 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.023979 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.523970442 +0000 UTC m=+175.365393448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.124766 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.124980 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.624951015 +0000 UTC m=+175.466374021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.125207 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.125548 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.625533714 +0000 UTC m=+175.466956720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.179167 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.225938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access\") pod \"74c8ece5-4056-4a29-b133-9cb189b79c01\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.226126 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.226216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir\") pod \"74c8ece5-4056-4a29-b133-9cb189b79c01\" (UID: \"74c8ece5-4056-4a29-b133-9cb189b79c01\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.226302 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.726275879 +0000 UTC m=+175.567698885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.226359 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "74c8ece5-4056-4a29-b133-9cb189b79c01" (UID: "74c8ece5-4056-4a29-b133-9cb189b79c01"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.226386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.226489 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74c8ece5-4056-4a29-b133-9cb189b79c01-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.226650 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.726642341 +0000 UTC m=+175.568065347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.231287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "74c8ece5-4056-4a29-b133-9cb189b79c01" (UID: "74c8ece5-4056-4a29-b133-9cb189b79c01"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.298183 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.298429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c8ece5-4056-4a29-b133-9cb189b79c01" containerName="pruner" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.298464 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c8ece5-4056-4a29-b133-9cb189b79c01" containerName="pruner" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.298563 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c8ece5-4056-4a29-b133-9cb189b79c01" containerName="pruner" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.299315 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.301269 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.307759 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.327701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.327955 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.827896683 +0000 UTC m=+175.669319689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.328116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.328213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.328637 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.828625786 +0000 UTC m=+175.670048802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.328952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plx6g\" (UniqueName: \"kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.329025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.329105 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74c8ece5-4056-4a29-b133-9cb189b79c01-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.430136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.430401 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.930372382 +0000 UTC m=+175.771795398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.430486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plx6g\" (UniqueName: \"kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.430586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.430700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.430738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.431040 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:33.931025243 +0000 UTC m=+175.772448249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.431059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.431143 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.451082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plx6g\" (UniqueName: \"kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g\") pod \"redhat-operators-knj5m\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.531844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.532039 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.032013916 +0000 UTC m=+175.873436922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.532617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.532993 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:34.032965107 +0000 UTC m=+175.874388103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.622210 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.633480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.633787 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.133770234 +0000 UTC m=+175.975193240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.635859 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:33 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:33 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:33 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.635912 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.709115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.710279 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.711716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.735729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.735805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.735928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9jf\" (UniqueName: \"kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.736006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.736242 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.236217713 +0000 UTC m=+176.077640719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.809420 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.838904 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.839176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp9jf\" (UniqueName: \"kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.839207 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.839258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.839743 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.840065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.841448 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.339851109 +0000 UTC m=+176.181274105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.858077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp9jf\" (UniqueName: \"kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf\") pod \"redhat-operators-t5jfs\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:33 crc kubenswrapper[4858]: I1122 07:13:33.940249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:33 crc kubenswrapper[4858]: E1122 07:13:33.940550 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.440538423 +0000 UTC m=+176.281961419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.000637 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.006163 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rsm26" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.010340 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.010334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"74c8ece5-4056-4a29-b133-9cb189b79c01","Type":"ContainerDied","Data":"2385ca7ac2e0aab2d8ebfa70e56864e2a267b92698d5f61bc55aff91e5e346a4"} Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.010456 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2385ca7ac2e0aab2d8ebfa70e56864e2a267b92698d5f61bc55aff91e5e346a4" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.011978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3b6e168c-0cd0-4d30-a99c-21b18714df19","Type":"ContainerStarted","Data":"2b21b5dd9a67d81e2a34b3d1b711ba6ae9e516f541f5bd2195a364b3605cb136"} Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.013275 4858 generic.go:334] "Generic (PLEG): container finished" podID="96d66887-e193-43e1-94c9-932220bee7a2" containerID="850fe739c6dde3b7f82e38d33b2865214ebcbb70e26513f03cf8f329daf442ab" exitCode=0 Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.013347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerDied","Data":"850fe739c6dde3b7f82e38d33b2865214ebcbb70e26513f03cf8f329daf442ab"} Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.015038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerStarted","Data":"5f122f42f892b6a3115c5677e4ec7d5a08661ef06a9ab2fbd4493d7c7eb582ba"} Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.028295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.035834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.037887 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njg8w" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.041038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.042163 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.542136096 +0000 UTC m=+176.383559122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.145216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.145650 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.645634558 +0000 UTC m=+176.487057564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.246236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.246584 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.74656671 +0000 UTC m=+176.587989716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.246700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.247080 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.747069285 +0000 UTC m=+176.588492291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.348048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.348217 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.848193032 +0000 UTC m=+176.689616028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.348723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.349100 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.849085941 +0000 UTC m=+176.690508957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.449705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.450083 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:34.950063074 +0000 UTC m=+176.791486090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.498623 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:13:34 crc kubenswrapper[4858]: W1122 07:13:34.505625 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9338eed_325b_4c13_bb27_758011490a06.slice/crio-046e84ebaa4ae1a167c52aa3ed42721bc92825821938a0a6dc55cd1c3b4d4157 WatchSource:0}: Error finding container 046e84ebaa4ae1a167c52aa3ed42721bc92825821938a0a6dc55cd1c3b4d4157: Status 404 returned error can't find the container with id 046e84ebaa4ae1a167c52aa3ed42721bc92825821938a0a6dc55cd1c3b4d4157 Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.551863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.552189 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.052174512 +0000 UTC m=+176.893597518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.639819 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:34 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:34 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:34 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.640247 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.652657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.652808 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.152782784 +0000 UTC m=+176.994205790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.652961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.653241 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.153229108 +0000 UTC m=+176.994652114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.754247 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.754675 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.254645735 +0000 UTC m=+177.096068761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.857018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.857402 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.357386903 +0000 UTC m=+177.198809909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.958245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.958389 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.458372516 +0000 UTC m=+177.299795522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:34 crc kubenswrapper[4858]: I1122 07:13:34.958471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:34 crc kubenswrapper[4858]: E1122 07:13:34.958778 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.458770489 +0000 UTC m=+177.300193495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.020273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerStarted","Data":"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48"} Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.022462 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerStarted","Data":"046e84ebaa4ae1a167c52aa3ed42721bc92825821938a0a6dc55cd1c3b4d4157"} Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.059663 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.059835 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.559814524 +0000 UTC m=+177.401237530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.060006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.060452 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.560441524 +0000 UTC m=+177.401864540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.161553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.161797 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.661764819 +0000 UTC m=+177.503187825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.161945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.162403 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.662380678 +0000 UTC m=+177.503803684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.262826 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.263182 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.763160755 +0000 UTC m=+177.604583761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.263266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.263769 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.763746633 +0000 UTC m=+177.605169659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.364269 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.364432 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.864405826 +0000 UTC m=+177.705828842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.364580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.364891 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.864874981 +0000 UTC m=+177.706297987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.465891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.466034 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.966009809 +0000 UTC m=+177.807432815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.466471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.466830 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:35.966819954 +0000 UTC m=+177.808242960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.567739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.567913 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.06788801 +0000 UTC m=+177.909311016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.568142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.568493 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.068483858 +0000 UTC m=+177.909906864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.633380 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:35 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:35 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:35 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.633436 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.669268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.669654 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.169616946 +0000 UTC m=+178.011039992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.770824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.771159 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.271144607 +0000 UTC m=+178.112567613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.872174 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.872378 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.372313786 +0000 UTC m=+178.213736792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.872513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.873028 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.373011018 +0000 UTC m=+178.214434024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.974020 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.974209 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.474184378 +0000 UTC m=+178.315607384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:35 crc kubenswrapper[4858]: I1122 07:13:35.974318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:35 crc kubenswrapper[4858]: E1122 07:13:35.974678 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.474668302 +0000 UTC m=+178.316091378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.028603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerStarted","Data":"0e9d75561acc3cbb460591fb356a6d7c8dde00cbfcfd7c0134b68bdc5b7cb454"} Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.029860 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2126ba6-5874-4d63-98e6-1425898e8271" containerID="0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48" exitCode=0 Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.029886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerDied","Data":"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48"} Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.075026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.075158 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.575134689 +0000 UTC m=+178.416557695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.075235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.075561 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.575553762 +0000 UTC m=+178.416976768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.176267 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.176622 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.676596027 +0000 UTC m=+178.518019033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.177093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.177487 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.677469875 +0000 UTC m=+178.518892891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.277738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.278011 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.777982493 +0000 UTC m=+178.619405509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.278093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.278436 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.778423517 +0000 UTC m=+178.619846523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.379215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.379392 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.879360379 +0000 UTC m=+178.720783395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.379466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.379783 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.879771782 +0000 UTC m=+178.721194788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.480299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.480490 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.980459326 +0000 UTC m=+178.821882332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.480667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.480728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.481117 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:36.981098195 +0000 UTC m=+178.822521221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.488954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/668a4495-5031-4084-9b05-d5d73dd20613-metrics-certs\") pod \"network-metrics-daemon-m2bfv\" (UID: \"668a4495-5031-4084-9b05-d5d73dd20613\") " pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.583118 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.583624 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.083594337 +0000 UTC m=+178.925017353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.632785 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:36 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:36 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:36 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.632844 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.684495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.684775 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.184761946 +0000 UTC m=+179.026184942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.764013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m2bfv" Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.785418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.785809 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.285793241 +0000 UTC m=+179.127216247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.887649 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.888268 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.38825467 +0000 UTC m=+179.229677676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.989506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.989624 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.489609415 +0000 UTC m=+179.331032421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:36 crc kubenswrapper[4858]: I1122 07:13:36.989817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:36 crc kubenswrapper[4858]: E1122 07:13:36.990168 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.490157132 +0000 UTC m=+179.331580138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.040290 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerID="0e9d75561acc3cbb460591fb356a6d7c8dde00cbfcfd7c0134b68bdc5b7cb454" exitCode=0 Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.040354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerDied","Data":"0e9d75561acc3cbb460591fb356a6d7c8dde00cbfcfd7c0134b68bdc5b7cb454"} Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.090989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.091122 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.591097184 +0000 UTC m=+179.432520190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.091435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.091841 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.591829897 +0000 UTC m=+179.433252973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.172985 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.172964184 podStartE2EDuration="6.172964184s" podCreationTimestamp="2025-11-22 07:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:37.057819844 +0000 UTC m=+178.899242850" watchObservedRunningTime="2025-11-22 07:13:37.172964184 +0000 UTC m=+179.014387190" Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.173296 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-m2bfv"] Nov 22 07:13:37 crc kubenswrapper[4858]: W1122 07:13:37.179777 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668a4495_5031_4084_9b05_d5d73dd20613.slice/crio-58b101320de8aaee8c95ce97ec0f441c8df18f4707b9b6553cda6c17074ffecd WatchSource:0}: Error finding container 58b101320de8aaee8c95ce97ec0f441c8df18f4707b9b6553cda6c17074ffecd: Status 404 returned error can't find the container with id 58b101320de8aaee8c95ce97ec0f441c8df18f4707b9b6553cda6c17074ffecd Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.192915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.193303 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.693286115 +0000 UTC m=+179.534709121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.294694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.294981 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.7949699 +0000 UTC m=+179.636392906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.396287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.396466 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.896435229 +0000 UTC m=+179.737858245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.397004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.397413 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.897377658 +0000 UTC m=+179.738800664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.498432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.498650 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.998604709 +0000 UTC m=+179.840027715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.499015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.499421 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:37.999411395 +0000 UTC m=+179.840834401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.600420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.600665 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.100627605 +0000 UTC m=+179.942050611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.601101 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.601389 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.101381388 +0000 UTC m=+179.942804394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.633106 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:37 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:37 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:37 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.633409 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.702830 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.703046 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.203013222 +0000 UTC m=+180.044436228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.703091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.703533 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.203525149 +0000 UTC m=+180.044948155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.804037 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.804692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.304658537 +0000 UTC m=+180.146081543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.812136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.813022 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.31300214 +0000 UTC m=+180.154425146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:37 crc kubenswrapper[4858]: I1122 07:13:37.913475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:37 crc kubenswrapper[4858]: E1122 07:13:37.914301 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.414283472 +0000 UTC m=+180.255706478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.015150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.015575 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.515559214 +0000 UTC m=+180.356982220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.047023 4858 generic.go:334] "Generic (PLEG): container finished" podID="d9338eed-325b-4c13-bb27-758011490a06" containerID="3414ce1a6af74fcdbd9d73b94001654e7b76c6e46487f0a7cec48672920219af" exitCode=0 Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.047095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerDied","Data":"3414ce1a6af74fcdbd9d73b94001654e7b76c6e46487f0a7cec48672920219af"} Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.048450 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b6e168c-0cd0-4d30-a99c-21b18714df19" containerID="2b21b5dd9a67d81e2a34b3d1b711ba6ae9e516f541f5bd2195a364b3605cb136" exitCode=0 Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.048600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3b6e168c-0cd0-4d30-a99c-21b18714df19","Type":"ContainerDied","Data":"2b21b5dd9a67d81e2a34b3d1b711ba6ae9e516f541f5bd2195a364b3605cb136"} Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.055426 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerID="2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917" exitCode=0 Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.055489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerDied","Data":"2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917"} Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.058648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/network-metrics-daemon-m2bfv" event={"ID":"668a4495-5031-4084-9b05-d5d73dd20613","Type":"ContainerStarted","Data":"58b101320de8aaee8c95ce97ec0f441c8df18f4707b9b6553cda6c17074ffecd"} Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.116186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.116380 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.616353731 +0000 UTC m=+180.457776737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.116422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.116747 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.616739913 +0000 UTC m=+180.458162919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.220669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.220791 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.720768242 +0000 UTC m=+180.562191268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.221599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.221944 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.721932499 +0000 UTC m=+180.563355505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.322237 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.322431 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.822405556 +0000 UTC m=+180.663828562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.322542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.322869 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.822861331 +0000 UTC m=+180.664284337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.423300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.423434 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.92341663 +0000 UTC m=+180.764839636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.423791 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.424122 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:38.924111932 +0000 UTC m=+180.765534938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.525929 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.526151 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.026109437 +0000 UTC m=+180.867532453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.526406 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.526809 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.026788859 +0000 UTC m=+180.868211865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.627420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.627722 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.127647868 +0000 UTC m=+180.969070884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.627824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.628191 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.128172225 +0000 UTC m=+180.969595231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.635068 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:38 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:38 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:38 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.635175 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.729692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.730009 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.229975503 +0000 UTC m=+181.071398519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.798421 4858 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.831820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.832261 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.332246247 +0000 UTC m=+181.173669253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vrgkv" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.917667 4858 patch_prober.go:28] interesting pod/console-f9d7485db-gtcln container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.917752 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gtcln" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.932914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:38 crc kubenswrapper[4858]: E1122 07:13:38.933338 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:39.433301013 +0000 UTC m=+181.274724029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.942296 4858 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-22T07:13:38.798459162Z","Handler":null,"Name":""} Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.956465 4858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.956515 4858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.956673 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.956824 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.956642 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:38 crc kubenswrapper[4858]: I1122 07:13:38.957510 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.035706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.039455 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
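[Editor's note] The entries above show the turning point: the kubelet's plugin watcher picks up the socket at /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, validates and registers the kubevirt.io.hostpath-provisioner CSI driver, and the MountVolume/UnmountVolume operations that had been failing every 500ms with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" start to succeed. As a rough cross-check from outside the kubelet log, one could look at whether the driver is reported on the node's CSINode object. This is only a minimal sketch, assuming cluster API access via a local kubeconfig and the kubernetes Python client; the node name "crc" and driver name are taken from the log, and CSINode is an approximation of the kubelet's in-memory driver registry, not the registry itself.

    # Sketch: check whether the hostpath-provisioner CSI driver is reported on node "crc".
    # Assumes a reachable cluster and a kubeconfig; node/driver names come from the log above.
    from kubernetes import client, config

    config.load_kube_config()                     # load credentials from the local kubeconfig
    storage = client.StorageV1Api()

    csinode = storage.read_csi_node("crc")        # CSINode mirrors drivers registered on the node
    drivers = [d.name for d in (csinode.spec.drivers or [])]
    print("registered CSI drivers on crc:", drivers)
    print("hostpath-provisioner present:",
          "kubevirt.io.hostpath-provisioner" in drivers)

If the driver is missing from that list while pods with hostpath-provisioner PVCs are pending, the mount errors seen earlier in this log are the expected symptom until the plugin socket is (re)registered.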
Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.039518 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.080632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vrgkv\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.091701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" event={"ID":"53f7564e-935e-41b1-bf5a-58d1d509a014","Type":"ContainerStarted","Data":"156bf2debcc17a32c6c3ccd800b119674254817dec1eb03fcf4b75066ca4656d"} Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.091787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" event={"ID":"53f7564e-935e-41b1-bf5a-58d1d509a014","Type":"ContainerStarted","Data":"4557c3658429725b11f93ce964ebbd355071eb53577ccb8ef38c04ff9c20749c"} Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.114944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" event={"ID":"668a4495-5031-4084-9b05-d5d73dd20613","Type":"ContainerStarted","Data":"a4a014874223beae33500a34bc964311b41e4bec5ae4c4ac3bd3de360ecbf961"} Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.114999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-m2bfv" event={"ID":"668a4495-5031-4084-9b05-d5d73dd20613","Type":"ContainerStarted","Data":"87ea75fedbcb96b3b427b47fa02d34854f0c0d53095a0b841c12387dae52aa84"} Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.138272 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.153303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.370738 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.482645 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.554070 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.555640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir\") pod \"3b6e168c-0cd0-4d30-a99c-21b18714df19\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.555805 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access\") pod \"3b6e168c-0cd0-4d30-a99c-21b18714df19\" (UID: \"3b6e168c-0cd0-4d30-a99c-21b18714df19\") " Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.555937 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3b6e168c-0cd0-4d30-a99c-21b18714df19" (UID: "3b6e168c-0cd0-4d30-a99c-21b18714df19"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.556040 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b6e168c-0cd0-4d30-a99c-21b18714df19-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.565675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3b6e168c-0cd0-4d30-a99c-21b18714df19" (UID: "3b6e168c-0cd0-4d30-a99c-21b18714df19"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.641667 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:39 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:39 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:39 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.642171 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.657551 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6e168c-0cd0-4d30-a99c-21b18714df19-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:39 crc kubenswrapper[4858]: I1122 07:13:39.850565 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.128158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" event={"ID":"53f7564e-935e-41b1-bf5a-58d1d509a014","Type":"ContainerStarted","Data":"b7466c28a14d9a221829964701bcb30cf54216020191228f18305db11eff9942"} Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.132149 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.132150 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3b6e168c-0cd0-4d30-a99c-21b18714df19","Type":"ContainerDied","Data":"4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b"} Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.132199 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4e22ea724cf957d78f3b69a2d88e73133ff51f3e9fe22f66430df9cbb9f43b" Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.134345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" event={"ID":"022ff96d-cffc-425d-8bce-d26d9ce573d3","Type":"ContainerStarted","Data":"87739e949dbd79aff91838d4222bc6a12eaa44a6293216281e6dc59bd17931fb"} Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.151994 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-55rnx" podStartSLOduration=24.151971216 podStartE2EDuration="24.151971216s" podCreationTimestamp="2025-11-22 07:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:40.150200351 +0000 UTC m=+181.991623387" watchObservedRunningTime="2025-11-22 07:13:40.151971216 +0000 UTC m=+181.993394222" Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.183207 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-m2bfv" podStartSLOduration=146.18316968 podStartE2EDuration="2m26.18316968s" podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:40.169259621 +0000 UTC m=+182.010682627" watchObservedRunningTime="2025-11-22 07:13:40.18316968 +0000 UTC m=+182.024592716" Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.633629 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:40 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Nov 22 07:13:40 crc kubenswrapper[4858]: [+]process-running ok Nov 22 07:13:40 crc kubenswrapper[4858]: healthz check failed Nov 22 07:13:40 crc kubenswrapper[4858]: I1122 07:13:40.633699 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:41 crc kubenswrapper[4858]: I1122 07:13:41.160215 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" event={"ID":"022ff96d-cffc-425d-8bce-d26d9ce573d3","Type":"ContainerStarted","Data":"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f"} Nov 22 07:13:41 crc kubenswrapper[4858]: I1122 07:13:41.190555 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" podStartSLOduration=147.190535353 podStartE2EDuration="2m27.190535353s" 
podCreationTimestamp="2025-11-22 07:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:41.188050964 +0000 UTC m=+183.029473990" watchObservedRunningTime="2025-11-22 07:13:41.190535353 +0000 UTC m=+183.031958359" Nov 22 07:13:41 crc kubenswrapper[4858]: I1122 07:13:41.735621 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:41 crc kubenswrapper[4858]: I1122 07:13:41.739848 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xllks" Nov 22 07:13:42 crc kubenswrapper[4858]: I1122 07:13:42.165751 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:13:45 crc kubenswrapper[4858]: I1122 07:13:45.312099 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:13:45 crc kubenswrapper[4858]: I1122 07:13:45.312162 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.924994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.929364 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.956537 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.956634 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.956718 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.956554 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.956848 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial 
tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.957126 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.957199 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.957724 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5"} pod="openshift-console/downloads-7954f5f757-bgn27" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 22 07:13:48 crc kubenswrapper[4858]: I1122 07:13:48.957859 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" containerID="cri-o://5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5" gracePeriod=2 Nov 22 07:13:50 crc kubenswrapper[4858]: I1122 07:13:50.207662 4858 generic.go:334] "Generic (PLEG): container finished" podID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerID="5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5" exitCode=0 Nov 22 07:13:50 crc kubenswrapper[4858]: I1122 07:13:50.207740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerDied","Data":"5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5"} Nov 22 07:13:58 crc kubenswrapper[4858]: I1122 07:13:58.956670 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:13:58 crc kubenswrapper[4858]: I1122 07:13:58.957226 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:13:59 crc kubenswrapper[4858]: I1122 07:13:59.376386 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:14:00 crc kubenswrapper[4858]: I1122 07:14:00.095929 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ngf52" Nov 22 07:14:01 crc kubenswrapper[4858]: I1122 07:14:01.755693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:14:08 crc kubenswrapper[4858]: I1122 07:14:08.956766 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:08 crc kubenswrapper[4858]: I1122 07:14:08.957305 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:14:12 crc kubenswrapper[4858]: E1122 07:14:12.392431 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:14:12 crc kubenswrapper[4858]: E1122 07:14:12.392658 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nswl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gllgw_openshift-marketplace(a2126ba6-5874-4d63-98e6-1425898e8271): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:12 crc kubenswrapper[4858]: E1122 07:14:12.393854 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gllgw" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.159828 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.161261 
4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw8dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fhllm_openshift-marketplace(6ca5c366-b319-4f1d-a936-e070dd85d876): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.162542 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fhllm" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.191411 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.191584 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8h6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qwxv2_openshift-marketplace(406d65df-4a13-40bf-93c3-48a06797a79b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:15 crc kubenswrapper[4858]: E1122 07:14:15.192790 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qwxv2" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" Nov 22 07:14:15 crc kubenswrapper[4858]: I1122 07:14:15.312310 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:14:15 crc kubenswrapper[4858]: I1122 07:14:15.312400 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:14:15 crc kubenswrapper[4858]: I1122 07:14:15.312449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:14:15 crc kubenswrapper[4858]: I1122 07:14:15.313064 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:14:15 crc kubenswrapper[4858]: I1122 07:14:15.313129 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb" gracePeriod=600 Nov 22 07:14:18 crc kubenswrapper[4858]: I1122 07:14:18.339293 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb" exitCode=0 Nov 22 07:14:18 crc kubenswrapper[4858]: I1122 07:14:18.339403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb"} Nov 22 07:14:18 crc kubenswrapper[4858]: I1122 07:14:18.958033 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:18 crc kubenswrapper[4858]: I1122 07:14:18.958121 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:14:23 crc kubenswrapper[4858]: E1122 07:14:23.467651 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:14:23 crc kubenswrapper[4858]: E1122 07:14:23.467910 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wrcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gg4fx_openshift-marketplace(7b566c9a-c894-4c64-8c08-9b4ff2f9d064): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:23 crc kubenswrapper[4858]: E1122 07:14:23.469633 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gg4fx" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" Nov 22 07:14:28 crc kubenswrapper[4858]: I1122 07:14:28.956019 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:28 crc kubenswrapper[4858]: I1122 07:14:28.956654 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:14:38 crc kubenswrapper[4858]: I1122 07:14:38.956781 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:38 crc kubenswrapper[4858]: I1122 07:14:38.957310 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:14:43 crc kubenswrapper[4858]: E1122 07:14:43.795761 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:14:43 crc kubenswrapper[4858]: E1122 07:14:43.796604 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b778r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bpkvd_openshift-marketplace(35fabf16-e20c-44d3-aa61-d3e9b881ab4e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:43 crc kubenswrapper[4858]: E1122 07:14:43.797829 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bpkvd" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" Nov 22 07:14:48 crc kubenswrapper[4858]: I1122 07:14:48.956647 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:48 crc kubenswrapper[4858]: I1122 07:14:48.956974 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.091897 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.092078 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gm4fm_openshift-marketplace(96d66887-e193-43e1-94c9-932220bee7a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.093296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gm4fm" podUID="96d66887-e193-43e1-94c9-932220bee7a2" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.512649 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bpkvd" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.528074 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.528402 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw8dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fhllm_openshift-marketplace(6ca5c366-b319-4f1d-a936-e070dd85d876): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.529715 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fhllm" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.530804 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.530923 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plx6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-knj5m_openshift-marketplace(a59d35e7-c68d-4908-aa12-e587cf1a65ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:52 crc kubenswrapper[4858]: E1122 07:14:52.532075 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-knj5m" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" Nov 22 07:14:58 crc kubenswrapper[4858]: I1122 07:14:58.956182 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:14:58 crc kubenswrapper[4858]: I1122 07:14:58.956648 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.136596 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42"] Nov 22 07:15:00 crc kubenswrapper[4858]: E1122 07:15:00.136882 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6e168c-0cd0-4d30-a99c-21b18714df19" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.136899 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6e168c-0cd0-4d30-a99c-21b18714df19" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.137031 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6e168c-0cd0-4d30-a99c-21b18714df19" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.137449 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.146665 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.147477 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.149564 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42"] Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.250178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkx9f\" (UniqueName: \"kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.250356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.250392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.351950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.352016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.352064 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkx9f\" (UniqueName: \"kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.352862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume\") pod 
\"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.367604 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.373122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkx9f\" (UniqueName: \"kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f\") pod \"collect-profiles-29396595-qdx42\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:00 crc kubenswrapper[4858]: I1122 07:15:00.471812 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:08 crc kubenswrapper[4858]: I1122 07:15:08.956574 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:08 crc kubenswrapper[4858]: I1122 07:15:08.957454 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:10 crc kubenswrapper[4858]: E1122 07:15:10.676869 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:15:10 crc kubenswrapper[4858]: E1122 07:15:10.677116 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp9jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t5jfs_openshift-marketplace(d9338eed-325b-4c13-bb27-758011490a06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:15:10 crc kubenswrapper[4858]: E1122 07:15:10.678430 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t5jfs" podUID="d9338eed-325b-4c13-bb27-758011490a06" Nov 22 07:15:15 crc kubenswrapper[4858]: I1122 07:15:15.663449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerStarted","Data":"3d4d32f1f60e58c1fdfaee42c3531a4a9c45e260eca7a79b7d362554c6c39126"} Nov 22 07:15:15 crc kubenswrapper[4858]: I1122 07:15:15.664148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:15:15 crc kubenswrapper[4858]: I1122 07:15:15.664603 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:15 crc kubenswrapper[4858]: I1122 07:15:15.664699 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:16 crc kubenswrapper[4858]: I1122 07:15:16.669450 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:16 crc kubenswrapper[4858]: I1122 
07:15:16.670037 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:17 crc kubenswrapper[4858]: E1122 07:15:17.682002 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t5jfs" podUID="d9338eed-325b-4c13-bb27-758011490a06" Nov 22 07:15:18 crc kubenswrapper[4858]: I1122 07:15:18.956215 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:18 crc kubenswrapper[4858]: I1122 07:15:18.956735 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:18 crc kubenswrapper[4858]: I1122 07:15:18.956433 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:18 crc kubenswrapper[4858]: I1122 07:15:18.957005 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:28 crc kubenswrapper[4858]: I1122 07:15:28.956822 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:28 crc kubenswrapper[4858]: I1122 07:15:28.957467 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:28 crc kubenswrapper[4858]: I1122 07:15:28.956828 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:28 crc kubenswrapper[4858]: I1122 07:15:28.957557 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 
07:15:38.955835 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.956577 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.955860 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.956622 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.956647 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.957178 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"3d4d32f1f60e58c1fdfaee42c3531a4a9c45e260eca7a79b7d362554c6c39126"} pod="openshift-console/downloads-7954f5f757-bgn27" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.957213 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" containerID="cri-o://3d4d32f1f60e58c1fdfaee42c3531a4a9c45e260eca7a79b7d362554c6c39126" gracePeriod=2 Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.958481 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:38 crc kubenswrapper[4858]: I1122 07:15:38.958532 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:39 crc kubenswrapper[4858]: I1122 07:15:39.778997 4858 generic.go:334] "Generic (PLEG): container finished" podID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerID="3d4d32f1f60e58c1fdfaee42c3531a4a9c45e260eca7a79b7d362554c6c39126" exitCode=0 Nov 22 07:15:39 crc kubenswrapper[4858]: I1122 07:15:39.779172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" 
event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerDied","Data":"3d4d32f1f60e58c1fdfaee42c3531a4a9c45e260eca7a79b7d362554c6c39126"} Nov 22 07:15:39 crc kubenswrapper[4858]: I1122 07:15:39.779487 4858 scope.go:117] "RemoveContainer" containerID="5557b554071f07f0d5955467629f394decbe5255d450690818527409c0156ae5" Nov 22 07:15:43 crc kubenswrapper[4858]: I1122 07:15:43.988597 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42"] Nov 22 07:15:44 crc kubenswrapper[4858]: W1122 07:15:44.393637 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a5eb82_d541_4f36_bed3_dda09042ee97.slice/crio-d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8 WatchSource:0}: Error finding container d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8: Status 404 returned error can't find the container with id d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8 Nov 22 07:15:44 crc kubenswrapper[4858]: I1122 07:15:44.807520 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" event={"ID":"50a5eb82-d541-4f36-bed3-dda09042ee97","Type":"ContainerStarted","Data":"d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8"} Nov 22 07:15:44 crc kubenswrapper[4858]: I1122 07:15:44.811955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18"} Nov 22 07:15:45 crc kubenswrapper[4858]: I1122 07:15:45.836249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bgn27" event={"ID":"e79f7ebf-0dac-4f86-b3f1-045904313fba","Type":"ContainerStarted","Data":"7c071812fcd816248449984e4d93d2d3a42797a597cdfd19a607b420c1e1847d"} Nov 22 07:15:45 crc kubenswrapper[4858]: I1122 07:15:45.837283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:15:45 crc kubenswrapper[4858]: I1122 07:15:45.838615 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:45 crc kubenswrapper[4858]: I1122 07:15:45.838669 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:45 crc kubenswrapper[4858]: I1122 07:15:45.839504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerStarted","Data":"9f0c7913a39b74cda2cfa4302b2dc91e2feb6ee57ab4ba4e97b891f324b3988a"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.866286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" 
event={"ID":"50a5eb82-d541-4f36-bed3-dda09042ee97","Type":"ContainerStarted","Data":"8c3b967ae6e961650bf7999871aa30c469a28a62fdcac606bef5d45c0f0697ec"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.881439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerStarted","Data":"1a03764c1aff4d9570acb0496f60b7838b4fd43cff2d2ccab2a755b05e3085b9"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.893230 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2126ba6-5874-4d63-98e6-1425898e8271" containerID="9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078" exitCode=0 Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.893299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerDied","Data":"9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.905705 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerID="9f0c7913a39b74cda2cfa4302b2dc91e2feb6ee57ab4ba4e97b891f324b3988a" exitCode=0 Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.905795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerDied","Data":"9f0c7913a39b74cda2cfa4302b2dc91e2feb6ee57ab4ba4e97b891f324b3988a"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.910413 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerID="37560644c77b68c13f322a97d0d502ba654d9c9fa18fee2e19523e85b77b8bf5" exitCode=0 Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.910480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerDied","Data":"37560644c77b68c13f322a97d0d502ba654d9c9fa18fee2e19523e85b77b8bf5"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.920952 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerStarted","Data":"92095f2e4f6646d8c093864e04c067ecf8355744662ca8c66bbc5eb4791a94e5"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.929246 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerStarted","Data":"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.934178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerStarted","Data":"313734ef571d797da9b974927739bfc1aea3affe49c4cfe0905f816ea3303864"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.942623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerStarted","Data":"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724"} Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 
07:15:46.942835 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:46 crc kubenswrapper[4858]: I1122 07:15:46.942882 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.961271 4858 generic.go:334] "Generic (PLEG): container finished" podID="50a5eb82-d541-4f36-bed3-dda09042ee97" containerID="8c3b967ae6e961650bf7999871aa30c469a28a62fdcac606bef5d45c0f0697ec" exitCode=0 Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.961361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" event={"ID":"50a5eb82-d541-4f36-bed3-dda09042ee97","Type":"ContainerDied","Data":"8c3b967ae6e961650bf7999871aa30c469a28a62fdcac606bef5d45c0f0697ec"} Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.964552 4858 generic.go:334] "Generic (PLEG): container finished" podID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerID="9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7" exitCode=0 Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.964615 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerDied","Data":"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7"} Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.968604 4858 generic.go:334] "Generic (PLEG): container finished" podID="96d66887-e193-43e1-94c9-932220bee7a2" containerID="1a03764c1aff4d9570acb0496f60b7838b4fd43cff2d2ccab2a755b05e3085b9" exitCode=0 Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.969554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerDied","Data":"1a03764c1aff4d9570acb0496f60b7838b4fd43cff2d2ccab2a755b05e3085b9"} Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.977945 4858 generic.go:334] "Generic (PLEG): container finished" podID="406d65df-4a13-40bf-93c3-48a06797a79b" containerID="313734ef571d797da9b974927739bfc1aea3affe49c4cfe0905f816ea3303864" exitCode=0 Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.978676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerDied","Data":"313734ef571d797da9b974927739bfc1aea3affe49c4cfe0905f816ea3303864"} Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.979939 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:47 crc kubenswrapper[4858]: I1122 07:15:47.979971 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.453640 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.609872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume\") pod \"50a5eb82-d541-4f36-bed3-dda09042ee97\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.610350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume\") pod \"50a5eb82-d541-4f36-bed3-dda09042ee97\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.610373 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkx9f\" (UniqueName: \"kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f\") pod \"50a5eb82-d541-4f36-bed3-dda09042ee97\" (UID: \"50a5eb82-d541-4f36-bed3-dda09042ee97\") " Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.610915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume" (OuterVolumeSpecName: "config-volume") pod "50a5eb82-d541-4f36-bed3-dda09042ee97" (UID: "50a5eb82-d541-4f36-bed3-dda09042ee97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.634957 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f" (OuterVolumeSpecName: "kube-api-access-nkx9f") pod "50a5eb82-d541-4f36-bed3-dda09042ee97" (UID: "50a5eb82-d541-4f36-bed3-dda09042ee97"). InnerVolumeSpecName "kube-api-access-nkx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.642531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50a5eb82-d541-4f36-bed3-dda09042ee97" (UID: "50a5eb82-d541-4f36-bed3-dda09042ee97"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.711354 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50a5eb82-d541-4f36-bed3-dda09042ee97-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.711385 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkx9f\" (UniqueName: \"kubernetes.io/projected/50a5eb82-d541-4f36-bed3-dda09042ee97-kube-api-access-nkx9f\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.711395 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a5eb82-d541-4f36-bed3-dda09042ee97-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.957662 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.957717 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.957878 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.957909 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.986189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" event={"ID":"50a5eb82-d541-4f36-bed3-dda09042ee97","Type":"ContainerDied","Data":"d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8"} Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.986231 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73ade06b7a8f4763071cfe0e4d728f614dcbe64321f3787c0267259b1c425a8" Nov 22 07:15:48 crc kubenswrapper[4858]: I1122 07:15:48.986289 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42" Nov 22 07:15:50 crc kubenswrapper[4858]: I1122 07:15:50.027918 4858 generic.go:334] "Generic (PLEG): container finished" podID="d9338eed-325b-4c13-bb27-758011490a06" containerID="92095f2e4f6646d8c093864e04c067ecf8355744662ca8c66bbc5eb4791a94e5" exitCode=0 Nov 22 07:15:50 crc kubenswrapper[4858]: I1122 07:15:50.027965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerDied","Data":"92095f2e4f6646d8c093864e04c067ecf8355744662ca8c66bbc5eb4791a94e5"} Nov 22 07:15:51 crc kubenswrapper[4858]: I1122 07:15:51.035135 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerID="72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724" exitCode=0 Nov 22 07:15:51 crc kubenswrapper[4858]: I1122 07:15:51.035210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerDied","Data":"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724"} Nov 22 07:15:58 crc kubenswrapper[4858]: I1122 07:15:58.956439 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:58 crc kubenswrapper[4858]: I1122 07:15:58.957116 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:15:58 crc kubenswrapper[4858]: I1122 07:15:58.956542 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-bgn27 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 22 07:15:58 crc kubenswrapper[4858]: I1122 07:15:58.957171 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bgn27" podUID="e79f7ebf-0dac-4f86-b3f1-045904313fba" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 22 07:16:08 crc kubenswrapper[4858]: I1122 07:16:08.961179 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bgn27" Nov 22 07:16:10 crc kubenswrapper[4858]: I1122 07:16:10.671553 4858 patch_prober.go:28] interesting pod/router-default-5444994796-xllks container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:16:10 crc kubenswrapper[4858]: I1122 07:16:10.671931 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xllks" podUID="dba0e5a3-9474-4dde-a3c3-52390b657290" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Nov 22 07:17:26 crc kubenswrapper[4858]: I1122 07:17:26.517177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerStarted","Data":"46932b2a046e902019aa76d64807369e8f9fd1e59e9923277569ec82bc09310c"} Nov 22 07:17:27 crc kubenswrapper[4858]: I1122 07:17:27.527798 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerStarted","Data":"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b"} Nov 22 07:17:27 crc kubenswrapper[4858]: I1122 07:17:27.531924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerStarted","Data":"157a7d118d0943a3e2591459dde27cbe4d4e22e0d456977a77b81e61d4a4f076"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.538872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerStarted","Data":"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.542042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerStarted","Data":"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.544074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerStarted","Data":"3d6dabea18744496bf745ce6a25f5ec8350ab84c3a782f9c69fd5104e7ade772"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.546117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerStarted","Data":"ab999838ffc3eb4132adacc4e657859aac89c35bedef797cc7058257f87dd16a"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.559440 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerStarted","Data":"1ac433ea727854dbbc6eff944b4273b384632bb95fd6bfc2f3c95e81118da847"} Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.567796 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpkvd" podStartSLOduration=8.680385108 podStartE2EDuration="3m58.567770369s" podCreationTimestamp="2025-11-22 07:13:30 +0000 UTC" firstStartedPulling="2025-11-22 07:13:31.99222471 +0000 UTC m=+173.833647716" lastFinishedPulling="2025-11-22 07:17:21.879609971 +0000 UTC m=+403.721032977" observedRunningTime="2025-11-22 07:17:28.564418499 +0000 UTC m=+410.405841515" watchObservedRunningTime="2025-11-22 07:17:28.567770369 +0000 UTC m=+410.409193405" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.583790 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gllgw" podStartSLOduration=41.331593513 podStartE2EDuration="3m57.583773575s" podCreationTimestamp="2025-11-22 07:13:31 
+0000 UTC" firstStartedPulling="2025-11-22 07:13:38.060253132 +0000 UTC m=+179.901676138" lastFinishedPulling="2025-11-22 07:16:54.312433194 +0000 UTC m=+376.153856200" observedRunningTime="2025-11-22 07:17:28.581223421 +0000 UTC m=+410.422646427" watchObservedRunningTime="2025-11-22 07:17:28.583773575 +0000 UTC m=+410.425196581" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.603612 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gg4fx" podStartSLOduration=16.571654019 podStartE2EDuration="3m59.603590246s" podCreationTimestamp="2025-11-22 07:13:29 +0000 UTC" firstStartedPulling="2025-11-22 07:13:31.970999791 +0000 UTC m=+173.812422797" lastFinishedPulling="2025-11-22 07:17:15.002936028 +0000 UTC m=+396.844359024" observedRunningTime="2025-11-22 07:17:28.598275282 +0000 UTC m=+410.439698298" watchObservedRunningTime="2025-11-22 07:17:28.603590246 +0000 UTC m=+410.445013252" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.621423 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhllm" podStartSLOduration=12.805301096 podStartE2EDuration="3m56.621404612s" podCreationTimestamp="2025-11-22 07:13:32 +0000 UTC" firstStartedPulling="2025-11-22 07:13:38.063505315 +0000 UTC m=+179.904928321" lastFinishedPulling="2025-11-22 07:17:21.879608831 +0000 UTC m=+403.721031837" observedRunningTime="2025-11-22 07:17:28.618452215 +0000 UTC m=+410.459875231" watchObservedRunningTime="2025-11-22 07:17:28.621404612 +0000 UTC m=+410.462827618" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.636860 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5jfs" podStartSLOduration=27.285706235 podStartE2EDuration="3m55.636824579s" podCreationTimestamp="2025-11-22 07:13:33 +0000 UTC" firstStartedPulling="2025-11-22 07:13:39.124485088 +0000 UTC m=+180.965908084" lastFinishedPulling="2025-11-22 07:17:07.475603412 +0000 UTC m=+389.317026428" observedRunningTime="2025-11-22 07:17:28.6365783 +0000 UTC m=+410.478001296" watchObservedRunningTime="2025-11-22 07:17:28.636824579 +0000 UTC m=+410.478247585" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.654808 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knj5m" podStartSLOduration=24.517955355 podStartE2EDuration="3m55.65478769s" podCreationTimestamp="2025-11-22 07:13:33 +0000 UTC" firstStartedPulling="2025-11-22 07:13:38.057186417 +0000 UTC m=+179.898609423" lastFinishedPulling="2025-11-22 07:17:09.194018752 +0000 UTC m=+391.035441758" observedRunningTime="2025-11-22 07:17:28.650524279 +0000 UTC m=+410.491947285" watchObservedRunningTime="2025-11-22 07:17:28.65478769 +0000 UTC m=+410.496210716" Nov 22 07:17:28 crc kubenswrapper[4858]: I1122 07:17:28.672858 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qwxv2" podStartSLOduration=10.81007426 podStartE2EDuration="3m58.672836482s" podCreationTimestamp="2025-11-22 07:13:30 +0000 UTC" firstStartedPulling="2025-11-22 07:13:34.016547369 +0000 UTC m=+175.857970375" lastFinishedPulling="2025-11-22 07:17:21.879309591 +0000 UTC m=+403.720732597" observedRunningTime="2025-11-22 07:17:28.671316413 +0000 UTC m=+410.512739419" watchObservedRunningTime="2025-11-22 07:17:28.672836482 +0000 UTC m=+410.514259488" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 
07:17:30.290990 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.291070 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.444335 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.444389 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.702595 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.702900 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.897844 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:30 crc kubenswrapper[4858]: I1122 07:17:30.897911 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:31 crc kubenswrapper[4858]: I1122 07:17:31.861040 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gg4fx" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:31 crc kubenswrapper[4858]: > Nov 22 07:17:31 crc kubenswrapper[4858]: I1122 07:17:31.862306 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bpkvd" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:31 crc kubenswrapper[4858]: > Nov 22 07:17:31 crc kubenswrapper[4858]: I1122 07:17:31.866184 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qwxv2" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:31 crc kubenswrapper[4858]: > Nov 22 07:17:31 crc kubenswrapper[4858]: I1122 07:17:31.936885 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gm4fm" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:31 crc kubenswrapper[4858]: > Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.230473 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.230535 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:17:32 crc 
kubenswrapper[4858]: I1122 07:17:32.356830 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.377261 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gm4fm" podStartSLOduration=28.209335681 podStartE2EDuration="4m2.37724524s" podCreationTimestamp="2025-11-22 07:13:30 +0000 UTC" firstStartedPulling="2025-11-22 07:13:35.025490772 +0000 UTC m=+176.866913778" lastFinishedPulling="2025-11-22 07:17:09.193400341 +0000 UTC m=+391.034823337" observedRunningTime="2025-11-22 07:17:28.692225849 +0000 UTC m=+410.533648865" watchObservedRunningTime="2025-11-22 07:17:32.37724524 +0000 UTC m=+414.218668246" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.618983 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.649116 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.649187 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:32 crc kubenswrapper[4858]: I1122 07:17:32.684100 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:33 crc kubenswrapper[4858]: I1122 07:17:33.622360 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:33 crc kubenswrapper[4858]: I1122 07:17:33.622800 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:17:33 crc kubenswrapper[4858]: I1122 07:17:33.622943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:17:34 crc kubenswrapper[4858]: I1122 07:17:34.036960 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:34 crc kubenswrapper[4858]: I1122 07:17:34.037017 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:34 crc kubenswrapper[4858]: I1122 07:17:34.659599 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knj5m" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:34 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:34 crc kubenswrapper[4858]: > Nov 22 07:17:34 crc kubenswrapper[4858]: I1122 07:17:34.973333 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:17:35 crc kubenswrapper[4858]: I1122 07:17:35.073525 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5jfs" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="registry-server" probeResult="failure" output=< Nov 22 07:17:35 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:17:35 crc kubenswrapper[4858]: > Nov 22 07:17:35 crc kubenswrapper[4858]: I1122 
07:17:35.591599 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fhllm" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="registry-server" containerID="cri-o://157a7d118d0943a3e2591459dde27cbe4d4e22e0d456977a77b81e61d4a4f076" gracePeriod=2 Nov 22 07:17:37 crc kubenswrapper[4858]: I1122 07:17:37.603503 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerID="157a7d118d0943a3e2591459dde27cbe4d4e22e0d456977a77b81e61d4a4f076" exitCode=0 Nov 22 07:17:37 crc kubenswrapper[4858]: I1122 07:17:37.603540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerDied","Data":"157a7d118d0943a3e2591459dde27cbe4d4e22e0d456977a77b81e61d4a4f076"} Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.254404 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.384887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content\") pod \"6ca5c366-b319-4f1d-a936-e070dd85d876\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.385250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities\") pod \"6ca5c366-b319-4f1d-a936-e070dd85d876\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.385275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw8dg\" (UniqueName: \"kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg\") pod \"6ca5c366-b319-4f1d-a936-e070dd85d876\" (UID: \"6ca5c366-b319-4f1d-a936-e070dd85d876\") " Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.387425 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities" (OuterVolumeSpecName: "utilities") pod "6ca5c366-b319-4f1d-a936-e070dd85d876" (UID: "6ca5c366-b319-4f1d-a936-e070dd85d876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.390479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg" (OuterVolumeSpecName: "kube-api-access-qw8dg") pod "6ca5c366-b319-4f1d-a936-e070dd85d876" (UID: "6ca5c366-b319-4f1d-a936-e070dd85d876"). InnerVolumeSpecName "kube-api-access-qw8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.405344 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ca5c366-b319-4f1d-a936-e070dd85d876" (UID: "6ca5c366-b319-4f1d-a936-e070dd85d876"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.486432 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.486467 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca5c366-b319-4f1d-a936-e070dd85d876-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.486490 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw8dg\" (UniqueName: \"kubernetes.io/projected/6ca5c366-b319-4f1d-a936-e070dd85d876-kube-api-access-qw8dg\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.615245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhllm" event={"ID":"6ca5c366-b319-4f1d-a936-e070dd85d876","Type":"ContainerDied","Data":"cb650d9287d9a3aa4c7d6471a39789c0370be689fdfebca357bd07d5fdd0bec7"} Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.615294 4858 scope.go:117] "RemoveContainer" containerID="157a7d118d0943a3e2591459dde27cbe4d4e22e0d456977a77b81e61d4a4f076" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.615268 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhllm" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.640469 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.643339 4858 scope.go:117] "RemoveContainer" containerID="37560644c77b68c13f322a97d0d502ba654d9c9fa18fee2e19523e85b77b8bf5" Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.643386 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhllm"] Nov 22 07:17:39 crc kubenswrapper[4858]: I1122 07:17:39.654864 4858 scope.go:117] "RemoveContainer" containerID="0e9d75561acc3cbb460591fb356a6d7c8dde00cbfcfd7c0134b68bdc5b7cb454" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.332919 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.377091 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.482218 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.521669 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.741383 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.781958 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.936490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:40 crc kubenswrapper[4858]: I1122 07:17:40.975573 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:41 crc kubenswrapper[4858]: I1122 07:17:41.544440 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" path="/var/lib/kubelet/pods/6ca5c366-b319-4f1d-a936-e070dd85d876/volumes" Nov 22 07:17:42 crc kubenswrapper[4858]: I1122 07:17:42.573421 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:17:42 crc kubenswrapper[4858]: I1122 07:17:42.631510 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qwxv2" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="registry-server" containerID="cri-o://ab999838ffc3eb4132adacc4e657859aac89c35bedef797cc7058257f87dd16a" gracePeriod=2 Nov 22 07:17:43 crc kubenswrapper[4858]: I1122 07:17:43.663620 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:17:43 crc kubenswrapper[4858]: I1122 07:17:43.705344 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:17:43 crc kubenswrapper[4858]: I1122 07:17:43.974162 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:17:43 crc kubenswrapper[4858]: I1122 07:17:43.974410 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gm4fm" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="registry-server" containerID="cri-o://46932b2a046e902019aa76d64807369e8f9fd1e59e9923277569ec82bc09310c" gracePeriod=2 Nov 22 07:17:44 crc kubenswrapper[4858]: I1122 07:17:44.077501 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:44 crc kubenswrapper[4858]: I1122 07:17:44.114889 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:44 crc kubenswrapper[4858]: I1122 07:17:44.645597 4858 generic.go:334] "Generic (PLEG): container finished" podID="406d65df-4a13-40bf-93c3-48a06797a79b" containerID="ab999838ffc3eb4132adacc4e657859aac89c35bedef797cc7058257f87dd16a" exitCode=0 Nov 22 07:17:44 crc kubenswrapper[4858]: I1122 07:17:44.645690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerDied","Data":"ab999838ffc3eb4132adacc4e657859aac89c35bedef797cc7058257f87dd16a"} Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.312514 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.312831 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.521152 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.654113 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxv2" event={"ID":"406d65df-4a13-40bf-93c3-48a06797a79b","Type":"ContainerDied","Data":"1e6554b066d9b6bb33ab670875fdfae360a4590968a0efe7c6afab61d1f34ac5"} Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.654158 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwxv2" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.654176 4858 scope.go:117] "RemoveContainer" containerID="ab999838ffc3eb4132adacc4e657859aac89c35bedef797cc7058257f87dd16a" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.667978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8h6v\" (UniqueName: \"kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v\") pod \"406d65df-4a13-40bf-93c3-48a06797a79b\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.668068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content\") pod \"406d65df-4a13-40bf-93c3-48a06797a79b\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.668170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities\") pod \"406d65df-4a13-40bf-93c3-48a06797a79b\" (UID: \"406d65df-4a13-40bf-93c3-48a06797a79b\") " Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.669157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities" (OuterVolumeSpecName: "utilities") pod "406d65df-4a13-40bf-93c3-48a06797a79b" (UID: "406d65df-4a13-40bf-93c3-48a06797a79b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.671019 4858 scope.go:117] "RemoveContainer" containerID="313734ef571d797da9b974927739bfc1aea3affe49c4cfe0905f816ea3303864" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.674936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v" (OuterVolumeSpecName: "kube-api-access-h8h6v") pod "406d65df-4a13-40bf-93c3-48a06797a79b" (UID: "406d65df-4a13-40bf-93c3-48a06797a79b"). InnerVolumeSpecName "kube-api-access-h8h6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.706254 4858 scope.go:117] "RemoveContainer" containerID="ae45a93d9af9c15bf11c0b5aca40ddc6f588cd7afde36598af092e1549ab52e3" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.716251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406d65df-4a13-40bf-93c3-48a06797a79b" (UID: "406d65df-4a13-40bf-93c3-48a06797a79b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.769822 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.770051 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406d65df-4a13-40bf-93c3-48a06797a79b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.770148 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8h6v\" (UniqueName: \"kubernetes.io/projected/406d65df-4a13-40bf-93c3-48a06797a79b-kube-api-access-h8h6v\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.993544 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:17:45 crc kubenswrapper[4858]: I1122 07:17:45.996958 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qwxv2"] Nov 22 07:17:46 crc kubenswrapper[4858]: I1122 07:17:46.377201 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:17:46 crc kubenswrapper[4858]: I1122 07:17:46.377696 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5jfs" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="registry-server" containerID="cri-o://1ac433ea727854dbbc6eff944b4273b384632bb95fd6bfc2f3c95e81118da847" gracePeriod=2 Nov 22 07:17:46 crc kubenswrapper[4858]: I1122 07:17:46.662781 4858 generic.go:334] "Generic (PLEG): container finished" podID="96d66887-e193-43e1-94c9-932220bee7a2" containerID="46932b2a046e902019aa76d64807369e8f9fd1e59e9923277569ec82bc09310c" exitCode=0 Nov 22 07:17:46 crc kubenswrapper[4858]: I1122 07:17:46.662824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerDied","Data":"46932b2a046e902019aa76d64807369e8f9fd1e59e9923277569ec82bc09310c"} Nov 22 07:17:47 crc kubenswrapper[4858]: I1122 07:17:47.542774 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" path="/var/lib/kubelet/pods/406d65df-4a13-40bf-93c3-48a06797a79b/volumes" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.446805 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.536553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content\") pod \"96d66887-e193-43e1-94c9-932220bee7a2\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.536875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities\") pod \"96d66887-e193-43e1-94c9-932220bee7a2\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.537027 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsc28\" (UniqueName: \"kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28\") pod \"96d66887-e193-43e1-94c9-932220bee7a2\" (UID: \"96d66887-e193-43e1-94c9-932220bee7a2\") " Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.538398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities" (OuterVolumeSpecName: "utilities") pod "96d66887-e193-43e1-94c9-932220bee7a2" (UID: "96d66887-e193-43e1-94c9-932220bee7a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.542507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28" (OuterVolumeSpecName: "kube-api-access-hsc28") pod "96d66887-e193-43e1-94c9-932220bee7a2" (UID: "96d66887-e193-43e1-94c9-932220bee7a2"). InnerVolumeSpecName "kube-api-access-hsc28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.587935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96d66887-e193-43e1-94c9-932220bee7a2" (UID: "96d66887-e193-43e1-94c9-932220bee7a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.638744 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsc28\" (UniqueName: \"kubernetes.io/projected/96d66887-e193-43e1-94c9-932220bee7a2-kube-api-access-hsc28\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.638798 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.638813 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96d66887-e193-43e1-94c9-932220bee7a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.687309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gm4fm" event={"ID":"96d66887-e193-43e1-94c9-932220bee7a2","Type":"ContainerDied","Data":"491a0b6165c7c2a598d54cb17cf49073173b5b1dc4343e894ab432b23e889203"} Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.687396 4858 scope.go:117] "RemoveContainer" containerID="46932b2a046e902019aa76d64807369e8f9fd1e59e9923277569ec82bc09310c" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.687540 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gm4fm" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.692430 4858 generic.go:334] "Generic (PLEG): container finished" podID="d9338eed-325b-4c13-bb27-758011490a06" containerID="1ac433ea727854dbbc6eff944b4273b384632bb95fd6bfc2f3c95e81118da847" exitCode=0 Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.692478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerDied","Data":"1ac433ea727854dbbc6eff944b4273b384632bb95fd6bfc2f3c95e81118da847"} Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.703894 4858 scope.go:117] "RemoveContainer" containerID="1a03764c1aff4d9570acb0496f60b7838b4fd43cff2d2ccab2a755b05e3085b9" Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.718866 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.722037 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gm4fm"] Nov 22 07:17:50 crc kubenswrapper[4858]: I1122 07:17:50.739370 4858 scope.go:117] "RemoveContainer" containerID="850fe739c6dde3b7f82e38d33b2865214ebcbb70e26513f03cf8f329daf442ab" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.456039 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.543754 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d66887-e193-43e1-94c9-932220bee7a2" path="/var/lib/kubelet/pods/96d66887-e193-43e1-94c9-932220bee7a2/volumes" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.551441 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content\") pod \"d9338eed-325b-4c13-bb27-758011490a06\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.551486 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp9jf\" (UniqueName: \"kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf\") pod \"d9338eed-325b-4c13-bb27-758011490a06\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.551528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities\") pod \"d9338eed-325b-4c13-bb27-758011490a06\" (UID: \"d9338eed-325b-4c13-bb27-758011490a06\") " Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.552787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities" (OuterVolumeSpecName: "utilities") pod "d9338eed-325b-4c13-bb27-758011490a06" (UID: "d9338eed-325b-4c13-bb27-758011490a06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.558507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf" (OuterVolumeSpecName: "kube-api-access-jp9jf") pod "d9338eed-325b-4c13-bb27-758011490a06" (UID: "d9338eed-325b-4c13-bb27-758011490a06"). InnerVolumeSpecName "kube-api-access-jp9jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.635532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9338eed-325b-4c13-bb27-758011490a06" (UID: "d9338eed-325b-4c13-bb27-758011490a06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.653512 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.653560 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp9jf\" (UniqueName: \"kubernetes.io/projected/d9338eed-325b-4c13-bb27-758011490a06-kube-api-access-jp9jf\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.653574 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9338eed-325b-4c13-bb27-758011490a06-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.704016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5jfs" event={"ID":"d9338eed-325b-4c13-bb27-758011490a06","Type":"ContainerDied","Data":"046e84ebaa4ae1a167c52aa3ed42721bc92825821938a0a6dc55cd1c3b4d4157"} Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.704068 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5jfs" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.704113 4858 scope.go:117] "RemoveContainer" containerID="1ac433ea727854dbbc6eff944b4273b384632bb95fd6bfc2f3c95e81118da847" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.735988 4858 scope.go:117] "RemoveContainer" containerID="92095f2e4f6646d8c093864e04c067ecf8355744662ca8c66bbc5eb4791a94e5" Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.737640 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.741548 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5jfs"] Nov 22 07:17:51 crc kubenswrapper[4858]: I1122 07:17:51.768387 4858 scope.go:117] "RemoveContainer" containerID="3414ce1a6af74fcdbd9d73b94001654e7b76c6e46487f0a7cec48672920219af" Nov 22 07:17:53 crc kubenswrapper[4858]: I1122 07:17:53.545530 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9338eed-325b-4c13-bb27-758011490a06" path="/var/lib/kubelet/pods/d9338eed-325b-4c13-bb27-758011490a06/volumes" Nov 22 07:18:03 crc kubenswrapper[4858]: I1122 07:18:03.043148 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:18:15 crc kubenswrapper[4858]: I1122 07:18:15.311678 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:18:15 crc kubenswrapper[4858]: I1122 07:18:15.312201 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.077062 4858 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" podUID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" containerName="oauth-openshift" containerID="cri-o://5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616" gracePeriod=15 Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.438879 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473609 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-fskcv"] Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473851 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473863 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473873 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473881 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473894 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" containerName="oauth-openshift" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473901 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" containerName="oauth-openshift" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473911 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473916 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473925 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473931 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473940 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473946 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473956 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473962 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473971 4858 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473977 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.473985 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.473990 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.474001 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a5eb82-d541-4f36-bed3-dda09042ee97" containerName="collect-profiles" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474006 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a5eb82-d541-4f36-bed3-dda09042ee97" containerName="collect-profiles" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.474017 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474025 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.474036 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474043 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="extract-content" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.474051 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474058 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.474067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474074 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="extract-utilities" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474158 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" containerName="oauth-openshift" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474170 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="406d65df-4a13-40bf-93c3-48a06797a79b" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474177 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca5c366-b319-4f1d-a936-e070dd85d876" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474188 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a5eb82-d541-4f36-bed3-dda09042ee97" containerName="collect-profiles" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474196 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="96d66887-e193-43e1-94c9-932220bee7a2" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474212 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9338eed-325b-4c13-bb27-758011490a06" containerName="registry-server" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.474634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.488073 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-fskcv"] Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4bv2\" (UniqueName: \"kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499539 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle\") pod \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\" (UID: \"62cf2e91-277d-4243-93f5-7cc9416f3f6e\") " Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499732 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499793 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-dir\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499845 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-policies\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.499983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc 
kubenswrapper[4858]: I1122 07:18:28.500018 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.500045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tpb\" (UniqueName: \"kubernetes.io/projected/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-kube-api-access-26tpb\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.500075 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.500101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.500173 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.500825 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.501137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.501433 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.502396 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.506509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.506840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2" (OuterVolumeSpecName: "kube-api-access-h4bv2") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "kube-api-access-h4bv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.510207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.510418 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.510640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.513687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.514851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.516249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.519385 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "62cf2e91-277d-4243-93f5-7cc9416f3f6e" (UID: "62cf2e91-277d-4243-93f5-7cc9416f3f6e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-dir\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-policies\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-dir\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600864 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.600969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tpb\" (UniqueName: \"kubernetes.io/projected/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-kube-api-access-26tpb\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601280 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601295 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4bv2\" (UniqueName: 
\"kubernetes.io/projected/62cf2e91-277d-4243-93f5-7cc9416f3f6e-kube-api-access-h4bv2\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601307 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601364 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601377 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601390 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601403 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601418 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601430 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601443 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601455 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601467 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601480 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.601493 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/62cf2e91-277d-4243-93f5-7cc9416f3f6e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.602841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.603123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-audit-policies\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.603275 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.603482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.604798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.604919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.605185 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-error\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.605379 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") 
" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.605438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-session\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.605714 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-template-login\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.606494 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.608167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.618897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tpb\" (UniqueName: \"kubernetes.io/projected/47202320-1c9f-4e97-ad4b-2f5a4e1471fd-kube-api-access-26tpb\") pod \"oauth-openshift-7845fc8b9c-fskcv\" (UID: \"47202320-1c9f-4e97-ad4b-2f5a4e1471fd\") " pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.790971 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.888645 4858 generic.go:334] "Generic (PLEG): container finished" podID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" containerID="5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616" exitCode=0 Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.888703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" event={"ID":"62cf2e91-277d-4243-93f5-7cc9416f3f6e","Type":"ContainerDied","Data":"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616"} Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.888750 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.888778 4858 scope.go:117] "RemoveContainer" containerID="5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.888761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4wlm" event={"ID":"62cf2e91-277d-4243-93f5-7cc9416f3f6e","Type":"ContainerDied","Data":"880a985d92139278cf4e4daf4ee4461caaa14bb56767eb18ebb7d5b99f989a30"} Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.913430 4858 scope.go:117] "RemoveContainer" containerID="5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616" Nov 22 07:18:28 crc kubenswrapper[4858]: E1122 07:18:28.913885 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616\": container with ID starting with 5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616 not found: ID does not exist" containerID="5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.913924 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616"} err="failed to get container status \"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616\": rpc error: code = NotFound desc = could not find container \"5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616\": container with ID starting with 5ebbdbc6eb5e750e618e0255c1a2c917638c2b5b24571c1271126a2ef2512616 not found: ID does not exist" Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.922708 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.931945 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4wlm"] Nov 22 07:18:28 crc kubenswrapper[4858]: I1122 07:18:28.993094 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7845fc8b9c-fskcv"] Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 07:18:29.544002 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cf2e91-277d-4243-93f5-7cc9416f3f6e" path="/var/lib/kubelet/pods/62cf2e91-277d-4243-93f5-7cc9416f3f6e/volumes" Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 07:18:29.903053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" event={"ID":"47202320-1c9f-4e97-ad4b-2f5a4e1471fd","Type":"ContainerStarted","Data":"add544faf3019ab1c8f83661fdf7c47496bf6cd834b6b790f0449b7cca180a5c"} Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 07:18:29.903134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" event={"ID":"47202320-1c9f-4e97-ad4b-2f5a4e1471fd","Type":"ContainerStarted","Data":"3f7b6223db55c886ee9a057d1e3377c9f3eee9735fc2e4f3a325b718acdbe8bf"} Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 07:18:29.903259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 
07:18:29.908299 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" Nov 22 07:18:29 crc kubenswrapper[4858]: I1122 07:18:29.935600 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7845fc8b9c-fskcv" podStartSLOduration=26.935578157 podStartE2EDuration="26.935578157s" podCreationTimestamp="2025-11-22 07:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:18:29.920810984 +0000 UTC m=+471.762233990" watchObservedRunningTime="2025-11-22 07:18:29.935578157 +0000 UTC m=+471.777001163" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.240170 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.241622 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gg4fx" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="registry-server" containerID="cri-o://3d6dabea18744496bf745ce6a25f5ec8350ab84c3a782f9c69fd5104e7ade772" gracePeriod=30 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.243729 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.244056 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpkvd" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="registry-server" containerID="cri-o://8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1" gracePeriod=30 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.263630 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.263859 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" containerID="cri-o://9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a" gracePeriod=30 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.285886 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.286209 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gllgw" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" containerID="cri-o://28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" gracePeriod=30 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.314530 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.314992 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knj5m" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="registry-server" containerID="cri-o://4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b" gracePeriod=30 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.327747 4858 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l8tgd"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.334476 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: E1122 07:18:42.337929 4858 log.go:32] "ExecSync cmd from runtime service failed" err=< Nov 22 07:18:42 crc kubenswrapper[4858]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Nov 22 07:18:42 crc kubenswrapper[4858]: fail startup Nov 22 07:18:42 crc kubenswrapper[4858]: , stdout: , stderr: , exit code -1 Nov 22 07:18:42 crc kubenswrapper[4858]: > containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.338123 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gllgw" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" probeResult="failure" output="" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.339606 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l8tgd"] Nov 22 07:18:42 crc kubenswrapper[4858]: E1122 07:18:42.346063 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305 is running failed: container process not found" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:42 crc kubenswrapper[4858]: E1122 07:18:42.347047 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305 is running failed: container process not found" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:42 crc kubenswrapper[4858]: E1122 07:18:42.347104 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gllgw" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.470653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.470724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.470768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xqz\" (UniqueName: \"kubernetes.io/projected/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-kube-api-access-s8xqz\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.571899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.571969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.572008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xqz\" (UniqueName: \"kubernetes.io/projected/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-kube-api-access-s8xqz\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.574102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.581125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.592912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xqz\" (UniqueName: \"kubernetes.io/projected/df9ea739-ad0f-419c-8e7a-ead3aebbe71f-kube-api-access-s8xqz\") pod \"marketplace-operator-79b997595-l8tgd\" (UID: \"df9ea739-ad0f-419c-8e7a-ead3aebbe71f\") " pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.735761 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.745562 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.747163 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.758113 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.774913 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.874819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content\") pod \"a2126ba6-5874-4d63-98e6-1425898e8271\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875222 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b778r\" (UniqueName: \"kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r\") pod \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content\") pod \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca\") pod \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plx6g\" (UniqueName: \"kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g\") pod \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content\") pod \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics\") pod \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\" (UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875435 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgzhg\" (UniqueName: \"kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg\") pod \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\" 
(UID: \"febf775b-8c73-4ff1-99a0-ef53e4f20cd1\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875467 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities\") pod \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\" (UID: \"35fabf16-e20c-44d3-aa61-d3e9b881ab4e\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities\") pod \"a2126ba6-5874-4d63-98e6-1425898e8271\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities\") pod \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\" (UID: \"a59d35e7-c68d-4908-aa12-e587cf1a65ea\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.875541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nswl\" (UniqueName: \"kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl\") pod \"a2126ba6-5874-4d63-98e6-1425898e8271\" (UID: \"a2126ba6-5874-4d63-98e6-1425898e8271\") " Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.881571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "febf775b-8c73-4ff1-99a0-ef53e4f20cd1" (UID: "febf775b-8c73-4ff1-99a0-ef53e4f20cd1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.883274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities" (OuterVolumeSpecName: "utilities") pod "35fabf16-e20c-44d3-aa61-d3e9b881ab4e" (UID: "35fabf16-e20c-44d3-aa61-d3e9b881ab4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.884669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities" (OuterVolumeSpecName: "utilities") pod "a59d35e7-c68d-4908-aa12-e587cf1a65ea" (UID: "a59d35e7-c68d-4908-aa12-e587cf1a65ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.885572 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl" (OuterVolumeSpecName: "kube-api-access-9nswl") pod "a2126ba6-5874-4d63-98e6-1425898e8271" (UID: "a2126ba6-5874-4d63-98e6-1425898e8271"). InnerVolumeSpecName "kube-api-access-9nswl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.886463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "febf775b-8c73-4ff1-99a0-ef53e4f20cd1" (UID: "febf775b-8c73-4ff1-99a0-ef53e4f20cd1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.888370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g" (OuterVolumeSpecName: "kube-api-access-plx6g") pod "a59d35e7-c68d-4908-aa12-e587cf1a65ea" (UID: "a59d35e7-c68d-4908-aa12-e587cf1a65ea"). InnerVolumeSpecName "kube-api-access-plx6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.894530 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg" (OuterVolumeSpecName: "kube-api-access-rgzhg") pod "febf775b-8c73-4ff1-99a0-ef53e4f20cd1" (UID: "febf775b-8c73-4ff1-99a0-ef53e4f20cd1"). InnerVolumeSpecName "kube-api-access-rgzhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.895095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities" (OuterVolumeSpecName: "utilities") pod "a2126ba6-5874-4d63-98e6-1425898e8271" (UID: "a2126ba6-5874-4d63-98e6-1425898e8271"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.896237 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r" (OuterVolumeSpecName: "kube-api-access-b778r") pod "35fabf16-e20c-44d3-aa61-d3e9b881ab4e" (UID: "35fabf16-e20c-44d3-aa61-d3e9b881ab4e"). InnerVolumeSpecName "kube-api-access-b778r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.897114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2126ba6-5874-4d63-98e6-1425898e8271" (UID: "a2126ba6-5874-4d63-98e6-1425898e8271"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.944505 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35fabf16-e20c-44d3-aa61-d3e9b881ab4e" (UID: "35fabf16-e20c-44d3-aa61-d3e9b881ab4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.972081 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2126ba6-5874-4d63-98e6-1425898e8271" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" exitCode=0 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.972139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerDied","Data":"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.972169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gllgw" event={"ID":"a2126ba6-5874-4d63-98e6-1425898e8271","Type":"ContainerDied","Data":"ed61fdc636e30f272e48dc0b1d5318409d989cc5e48ead5252490cfa3a434cac"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.972189 4858 scope.go:117] "RemoveContainer" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.972387 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gllgw" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977270 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plx6g\" (UniqueName: \"kubernetes.io/projected/a59d35e7-c68d-4908-aa12-e587cf1a65ea-kube-api-access-plx6g\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977294 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977304 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgzhg\" (UniqueName: \"kubernetes.io/projected/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-kube-api-access-rgzhg\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977327 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977338 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977346 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977355 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nswl\" (UniqueName: \"kubernetes.io/projected/a2126ba6-5874-4d63-98e6-1425898e8271-kube-api-access-9nswl\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977363 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2126ba6-5874-4d63-98e6-1425898e8271-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: 
I1122 07:18:42.977372 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b778r\" (UniqueName: \"kubernetes.io/projected/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-kube-api-access-b778r\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977381 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35fabf16-e20c-44d3-aa61-d3e9b881ab4e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.977389 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/febf775b-8c73-4ff1-99a0-ef53e4f20cd1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.980727 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerID="3d6dabea18744496bf745ce6a25f5ec8350ab84c3a782f9c69fd5104e7ade772" exitCode=0 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.980817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerDied","Data":"3d6dabea18744496bf745ce6a25f5ec8350ab84c3a782f9c69fd5104e7ade772"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.984473 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerID="4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b" exitCode=0 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.984539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerDied","Data":"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.984568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knj5m" event={"ID":"a59d35e7-c68d-4908-aa12-e587cf1a65ea","Type":"ContainerDied","Data":"5f122f42f892b6a3115c5677e4ec7d5a08661ef06a9ab2fbd4493d7c7eb582ba"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.984634 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knj5m" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.984998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l8tgd"] Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.988197 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a59d35e7-c68d-4908-aa12-e587cf1a65ea" (UID: "a59d35e7-c68d-4908-aa12-e587cf1a65ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.990123 4858 generic.go:334] "Generic (PLEG): container finished" podID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerID="8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1" exitCode=0 Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.990200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerDied","Data":"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.990230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpkvd" event={"ID":"35fabf16-e20c-44d3-aa61-d3e9b881ab4e","Type":"ContainerDied","Data":"1e27113cf9aad6b6d4671eb3273032ab4d7111465101e6a20b190dda677d1bfe"} Nov 22 07:18:42 crc kubenswrapper[4858]: I1122 07:18:42.990615 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpkvd" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.000076 4858 generic.go:334] "Generic (PLEG): container finished" podID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerID="9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a" exitCode=0 Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.000131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" event={"ID":"febf775b-8c73-4ff1-99a0-ef53e4f20cd1","Type":"ContainerDied","Data":"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a"} Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.000158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" event={"ID":"febf775b-8c73-4ff1-99a0-ef53e4f20cd1","Type":"ContainerDied","Data":"dcdd8a4593ed4399a007e729e070297c15f2d6cb216dc63740b58a33a4127c56"} Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.000168 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qbgwx" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.030137 4858 scope.go:117] "RemoveContainer" containerID="9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.037073 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.043411 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gllgw"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.057590 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.059129 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpkvd"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.059729 4858 scope.go:117] "RemoveContainer" containerID="0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.064922 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.071717 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qbgwx"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.078705 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59d35e7-c68d-4908-aa12-e587cf1a65ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.090667 4858 scope.go:117] "RemoveContainer" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.091486 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305\": container with ID starting with 28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305 not found: ID does not exist" containerID="28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.091584 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305"} err="failed to get container status \"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305\": rpc error: code = NotFound desc = could not find container \"28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305\": container with ID starting with 28855280cda095a64b3bf24b132bd00c884ae39fdc5cd5b169415372f1acd305 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.091610 4858 scope.go:117] "RemoveContainer" containerID="9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.092331 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078\": container with ID starting with 9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078 not 
found: ID does not exist" containerID="9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.092355 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078"} err="failed to get container status \"9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078\": rpc error: code = NotFound desc = could not find container \"9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078\": container with ID starting with 9b0521bdbec016fa1ca8663fc546d98a70cc6cb39a4da4089640cc7f83ce7078 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.092376 4858 scope.go:117] "RemoveContainer" containerID="0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.092723 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48\": container with ID starting with 0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48 not found: ID does not exist" containerID="0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.092745 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48"} err="failed to get container status \"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48\": rpc error: code = NotFound desc = could not find container \"0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48\": container with ID starting with 0b7144798e64019a4f95e027c255c80032bc356841a0df8d1072ee11977cdd48 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.092759 4858 scope.go:117] "RemoveContainer" containerID="4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.114992 4858 scope.go:117] "RemoveContainer" containerID="72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.136599 4858 scope.go:117] "RemoveContainer" containerID="2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.159847 4858 scope.go:117] "RemoveContainer" containerID="4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.160563 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b\": container with ID starting with 4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b not found: ID does not exist" containerID="4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.160630 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b"} err="failed to get container status \"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b\": rpc error: code = NotFound desc = could not find container 
\"4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b\": container with ID starting with 4e6e686159b5b369ff8d936165517d15ce7474ffce3dae14384e24682db2440b not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.160663 4858 scope.go:117] "RemoveContainer" containerID="72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.161064 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724\": container with ID starting with 72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724 not found: ID does not exist" containerID="72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.161105 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724"} err="failed to get container status \"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724\": rpc error: code = NotFound desc = could not find container \"72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724\": container with ID starting with 72c7d47529e0f5d1d82bebf35282fcd2109c15dfe437d49fa12e24f67bac6724 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.161128 4858 scope.go:117] "RemoveContainer" containerID="2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.161527 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917\": container with ID starting with 2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917 not found: ID does not exist" containerID="2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.161550 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917"} err="failed to get container status \"2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917\": rpc error: code = NotFound desc = could not find container \"2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917\": container with ID starting with 2f9acdd897fd1968bb96ba6265a5b1fd6faa1f51ed7fc9dc5a16a6a5b1e4c917 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.161575 4858 scope.go:117] "RemoveContainer" containerID="8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.180772 4858 scope.go:117] "RemoveContainer" containerID="9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.203123 4858 scope.go:117] "RemoveContainer" containerID="14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.226633 4858 scope.go:117] "RemoveContainer" containerID="8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.227145 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1\": container with ID starting with 8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1 not found: ID does not exist" containerID="8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.227179 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1"} err="failed to get container status \"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1\": rpc error: code = NotFound desc = could not find container \"8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1\": container with ID starting with 8c00a7d3c1d0c2cdcc429865a537166ab304184a043b66c46de65112d3aab8a1 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.227205 4858 scope.go:117] "RemoveContainer" containerID="9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.229029 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7\": container with ID starting with 9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7 not found: ID does not exist" containerID="9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.229061 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7"} err="failed to get container status \"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7\": rpc error: code = NotFound desc = could not find container \"9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7\": container with ID starting with 9fb191bb7ebf02936ae73a9ab19337aa6e68956c5c9be0df024ea3b323f82ee7 not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.229107 4858 scope.go:117] "RemoveContainer" containerID="14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.230705 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c\": container with ID starting with 14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c not found: ID does not exist" containerID="14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.230761 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c"} err="failed to get container status \"14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c\": rpc error: code = NotFound desc = could not find container \"14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c\": container with ID starting with 14231691adabf1f7946c9d9b0df98725a7c7e0351f8fabe0cd38fc2553d2340c not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.230781 4858 scope.go:117] "RemoveContainer" containerID="9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a" Nov 22 
07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.249966 4858 scope.go:117] "RemoveContainer" containerID="9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a" Nov 22 07:18:43 crc kubenswrapper[4858]: E1122 07:18:43.250771 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a\": container with ID starting with 9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a not found: ID does not exist" containerID="9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.251112 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a"} err="failed to get container status \"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a\": rpc error: code = NotFound desc = could not find container \"9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a\": container with ID starting with 9b536548f0399304c4a65c14178f66eac523592d2957a38ad8c1df01ce605c6a not found: ID does not exist" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.305168 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.318754 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.321611 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knj5m"] Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.483067 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities\") pod \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.483123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content\") pod \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.483184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wrcj\" (UniqueName: \"kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj\") pod \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\" (UID: \"7b566c9a-c894-4c64-8c08-9b4ff2f9d064\") " Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.484201 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities" (OuterVolumeSpecName: "utilities") pod "7b566c9a-c894-4c64-8c08-9b4ff2f9d064" (UID: "7b566c9a-c894-4c64-8c08-9b4ff2f9d064"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.489552 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj" (OuterVolumeSpecName: "kube-api-access-7wrcj") pod "7b566c9a-c894-4c64-8c08-9b4ff2f9d064" (UID: "7b566c9a-c894-4c64-8c08-9b4ff2f9d064"). InnerVolumeSpecName "kube-api-access-7wrcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.542460 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" path="/var/lib/kubelet/pods/35fabf16-e20c-44d3-aa61-d3e9b881ab4e/volumes" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.543184 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" path="/var/lib/kubelet/pods/a2126ba6-5874-4d63-98e6-1425898e8271/volumes" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.543946 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" path="/var/lib/kubelet/pods/a59d35e7-c68d-4908-aa12-e587cf1a65ea/volumes" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.543945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b566c9a-c894-4c64-8c08-9b4ff2f9d064" (UID: "7b566c9a-c894-4c64-8c08-9b4ff2f9d064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.545179 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" path="/var/lib/kubelet/pods/febf775b-8c73-4ff1-99a0-ef53e4f20cd1/volumes" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.584583 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.584620 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:43 crc kubenswrapper[4858]: I1122 07:18:43.584636 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wrcj\" (UniqueName: \"kubernetes.io/projected/7b566c9a-c894-4c64-8c08-9b4ff2f9d064-kube-api-access-7wrcj\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.008197 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gg4fx" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.008193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gg4fx" event={"ID":"7b566c9a-c894-4c64-8c08-9b4ff2f9d064","Type":"ContainerDied","Data":"7cd75c35540135e95e062e3be0dadf77e160d05239e47e550b96d30ddc88a979"} Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.008367 4858 scope.go:117] "RemoveContainer" containerID="3d6dabea18744496bf745ce6a25f5ec8350ab84c3a782f9c69fd5104e7ade772" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.014387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" event={"ID":"df9ea739-ad0f-419c-8e7a-ead3aebbe71f","Type":"ContainerStarted","Data":"0a8ff2d646b8668870f5594a6296fbfe340cefeafa5b37334baded8d7ce871b6"} Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.014434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" event={"ID":"df9ea739-ad0f-419c-8e7a-ead3aebbe71f","Type":"ContainerStarted","Data":"50f39ca0fc25d1a40a0951a0784588303aa719db2f39aebec6c66e8b12e26559"} Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.014551 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.016556 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.024673 4858 scope.go:117] "RemoveContainer" containerID="9f0c7913a39b74cda2cfa4302b2dc91e2feb6ee57ab4ba4e97b891f324b3988a" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.031009 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.036246 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gg4fx"] Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.045726 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l8tgd" podStartSLOduration=2.045696014 podStartE2EDuration="2.045696014s" podCreationTimestamp="2025-11-22 07:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:18:44.043526034 +0000 UTC m=+485.884949040" watchObservedRunningTime="2025-11-22 07:18:44.045696014 +0000 UTC m=+485.887119020" Nov 22 07:18:44 crc kubenswrapper[4858]: I1122 07:18:44.063498 4858 scope.go:117] "RemoveContainer" containerID="ecea4307f448d9abf261c8ac0346ae435a78568a4b11b159247c8a0d0af25576" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.053494 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwt9j"] Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055283 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055312 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 
07:18:45.055335 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055342 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055354 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055362 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055371 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055377 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055384 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055389 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055398 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055403 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055424 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055429 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055435 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055441 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055449 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055455 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="extract-content" Nov 22 
07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="extract-utilities" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055479 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055486 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="extract-content" Nov 22 07:18:45 crc kubenswrapper[4858]: E1122 07:18:45.055495 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055502 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055604 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2126ba6-5874-4d63-98e6-1425898e8271" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055617 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59d35e7-c68d-4908-aa12-e587cf1a65ea" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055625 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055634 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="35fabf16-e20c-44d3-aa61-d3e9b881ab4e" containerName="registry-server" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.055644 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="febf775b-8c73-4ff1-99a0-ef53e4f20cd1" containerName="marketplace-operator" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.057851 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.060558 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.071250 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwt9j"] Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.203807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm66h\" (UniqueName: \"kubernetes.io/projected/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-kube-api-access-zm66h\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.203896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-catalog-content\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.203970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-utilities\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.305253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-catalog-content\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.304848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-catalog-content\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.305460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-utilities\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.305511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm66h\" (UniqueName: \"kubernetes.io/projected/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-kube-api-access-zm66h\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.305952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-utilities\") pod \"redhat-marketplace-rwt9j\" (UID: 
\"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.312729 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.312787 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.312843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.313431 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.313524 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18" gracePeriod=600 Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.325098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm66h\" (UniqueName: \"kubernetes.io/projected/92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38-kube-api-access-zm66h\") pod \"redhat-marketplace-rwt9j\" (UID: \"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38\") " pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.385941 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.551912 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b566c9a-c894-4c64-8c08-9b4ff2f9d064" path="/var/lib/kubelet/pods/7b566c9a-c894-4c64-8c08-9b4ff2f9d064/volumes" Nov 22 07:18:45 crc kubenswrapper[4858]: I1122 07:18:45.583268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwt9j"] Nov 22 07:18:45 crc kubenswrapper[4858]: W1122 07:18:45.592911 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92fcd18c_b4b0_4c21_bcd5_b9f4fdcadd38.slice/crio-2b68ad0c0b0ccda5c5d2d4a63d501eb7efbb262d91df458076491937fb25a365 WatchSource:0}: Error finding container 2b68ad0c0b0ccda5c5d2d4a63d501eb7efbb262d91df458076491937fb25a365: Status 404 returned error can't find the container with id 2b68ad0c0b0ccda5c5d2d4a63d501eb7efbb262d91df458076491937fb25a365 Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.028867 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18" exitCode=0 Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.028934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18"} Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.029294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5"} Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.029368 4858 scope.go:117] "RemoveContainer" containerID="38b64ede7250124b64072047b4e554cbabc9afa040a562afc2f4cc25ff4953cb" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.033418 4858 generic.go:334] "Generic (PLEG): container finished" podID="92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38" containerID="d326d24eb7179286a5e5066ffafe0dfca3ac4247bc1dcbcf1420e57e8e6c9935" exitCode=0 Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.033534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwt9j" event={"ID":"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38","Type":"ContainerDied","Data":"d326d24eb7179286a5e5066ffafe0dfca3ac4247bc1dcbcf1420e57e8e6c9935"} Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.033603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwt9j" event={"ID":"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38","Type":"ContainerStarted","Data":"2b68ad0c0b0ccda5c5d2d4a63d501eb7efbb262d91df458076491937fb25a365"} Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.034923 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.056952 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlfqj"] Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.058100 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.061022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.061666 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlfqj"] Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.217187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-utilities\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.217395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-catalog-content\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.217424 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbg9\" (UniqueName: \"kubernetes.io/projected/5083b088-27e9-4ac4-b0bd-a1d5675891c0-kube-api-access-dvbg9\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.318876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-utilities\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.318959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-catalog-content\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.318980 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbg9\" (UniqueName: \"kubernetes.io/projected/5083b088-27e9-4ac4-b0bd-a1d5675891c0-kube-api-access-dvbg9\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.319517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-catalog-content\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.319583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5083b088-27e9-4ac4-b0bd-a1d5675891c0-utilities\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " 
pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.338976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbg9\" (UniqueName: \"kubernetes.io/projected/5083b088-27e9-4ac4-b0bd-a1d5675891c0-kube-api-access-dvbg9\") pod \"redhat-operators-nlfqj\" (UID: \"5083b088-27e9-4ac4-b0bd-a1d5675891c0\") " pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.381957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:46 crc kubenswrapper[4858]: I1122 07:18:46.564511 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlfqj"] Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.040172 4858 generic.go:334] "Generic (PLEG): container finished" podID="92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38" containerID="5666f716bd2242d85b2fc7bf4f0ec17e9a51a8c77cb2797e3c69553395e0017e" exitCode=0 Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.040222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwt9j" event={"ID":"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38","Type":"ContainerDied","Data":"5666f716bd2242d85b2fc7bf4f0ec17e9a51a8c77cb2797e3c69553395e0017e"} Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.051244 4858 generic.go:334] "Generic (PLEG): container finished" podID="5083b088-27e9-4ac4-b0bd-a1d5675891c0" containerID="a3a66281953250bcf416cc02e93304a7592519190d0e0f83f6ec0ee0277fd9a2" exitCode=0 Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.051289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlfqj" event={"ID":"5083b088-27e9-4ac4-b0bd-a1d5675891c0","Type":"ContainerDied","Data":"a3a66281953250bcf416cc02e93304a7592519190d0e0f83f6ec0ee0277fd9a2"} Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.051335 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlfqj" event={"ID":"5083b088-27e9-4ac4-b0bd-a1d5675891c0","Type":"ContainerStarted","Data":"fb68b4ea6f8eb383f46b7c22ddd2c6eab3cac29425fa88907916546342a18334"} Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.452555 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.454926 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.463641 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.468152 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.536358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnnwk\" (UniqueName: \"kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.536417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.536467 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.637984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnnwk\" (UniqueName: \"kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.638049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.638100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.638909 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.640540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content\") pod \"community-operators-cqsnt\" (UID: 
\"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.658424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnnwk\" (UniqueName: \"kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk\") pod \"community-operators-cqsnt\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:47 crc kubenswrapper[4858]: I1122 07:18:47.779165 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.058726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlfqj" event={"ID":"5083b088-27e9-4ac4-b0bd-a1d5675891c0","Type":"ContainerStarted","Data":"769141c0b53b62debca1674e64f8950ed30322fb2d525a648410f2e79686708c"} Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.060709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwt9j" event={"ID":"92fcd18c-b4b0-4c21-bcd5-b9f4fdcadd38","Type":"ContainerStarted","Data":"75c3758d7942c93e2eeb0e50b18878c0f7444272a441123cfc45ea1392b94168"} Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.105591 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwt9j" podStartSLOduration=1.673953891 podStartE2EDuration="3.105573773s" podCreationTimestamp="2025-11-22 07:18:45 +0000 UTC" firstStartedPulling="2025-11-22 07:18:46.034723681 +0000 UTC m=+487.876146687" lastFinishedPulling="2025-11-22 07:18:47.466343563 +0000 UTC m=+489.307766569" observedRunningTime="2025-11-22 07:18:48.104578781 +0000 UTC m=+489.946001817" watchObservedRunningTime="2025-11-22 07:18:48.105573773 +0000 UTC m=+489.946996779" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.193923 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:18:48 crc kubenswrapper[4858]: W1122 07:18:48.198637 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed98ca3_48f8_4737_8954_fac4bea34ad1.slice/crio-3c6b6ff745e19a83036f437b3df27497ae291294f3fa432686eba6722d9ce8d8 WatchSource:0}: Error finding container 3c6b6ff745e19a83036f437b3df27497ae291294f3fa432686eba6722d9ce8d8: Status 404 returned error can't find the container with id 3c6b6ff745e19a83036f437b3df27497ae291294f3fa432686eba6722d9ce8d8 Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.450899 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsfk6"] Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.452027 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.453943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.467059 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsfk6"] Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.651479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8cm\" (UniqueName: \"kubernetes.io/projected/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-kube-api-access-sx8cm\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.651867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-catalog-content\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.651921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-utilities\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.755451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8cm\" (UniqueName: \"kubernetes.io/projected/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-kube-api-access-sx8cm\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.755505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-catalog-content\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.755534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-utilities\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.755967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-utilities\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.756204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-catalog-content\") pod \"certified-operators-bsfk6\" (UID: 
\"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:48 crc kubenswrapper[4858]: I1122 07:18:48.786648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8cm\" (UniqueName: \"kubernetes.io/projected/93df6386-eaf3-4ba8-9d01-8b1819b0eb06-kube-api-access-sx8cm\") pod \"certified-operators-bsfk6\" (UID: \"93df6386-eaf3-4ba8-9d01-8b1819b0eb06\") " pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.066760 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerID="e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad" exitCode=0 Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.066823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerDied","Data":"e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad"} Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.066853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerStarted","Data":"3c6b6ff745e19a83036f437b3df27497ae291294f3fa432686eba6722d9ce8d8"} Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.069902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.070029 4858 generic.go:334] "Generic (PLEG): container finished" podID="5083b088-27e9-4ac4-b0bd-a1d5675891c0" containerID="769141c0b53b62debca1674e64f8950ed30322fb2d525a648410f2e79686708c" exitCode=0 Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.070207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlfqj" event={"ID":"5083b088-27e9-4ac4-b0bd-a1d5675891c0","Type":"ContainerDied","Data":"769141c0b53b62debca1674e64f8950ed30322fb2d525a648410f2e79686708c"} Nov 22 07:18:49 crc kubenswrapper[4858]: I1122 07:18:49.254998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsfk6"] Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.084541 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerID="841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835" exitCode=0 Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.084632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerDied","Data":"841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835"} Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.104302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlfqj" event={"ID":"5083b088-27e9-4ac4-b0bd-a1d5675891c0","Type":"ContainerStarted","Data":"354f55532685e2ba351349c685c134f61c9305a8129adcad18d6b1cc4c95303c"} Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.107060 4858 generic.go:334] "Generic (PLEG): container finished" podID="93df6386-eaf3-4ba8-9d01-8b1819b0eb06" containerID="b835b177fddcf9d5301d24628d1325f929654cafe923e99a12164a22f4531b25" exitCode=0 Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 
07:18:50.107096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsfk6" event={"ID":"93df6386-eaf3-4ba8-9d01-8b1819b0eb06","Type":"ContainerDied","Data":"b835b177fddcf9d5301d24628d1325f929654cafe923e99a12164a22f4531b25"} Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.107114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsfk6" event={"ID":"93df6386-eaf3-4ba8-9d01-8b1819b0eb06","Type":"ContainerStarted","Data":"a3667c1bc570759775457bb10729a9993e157c2d00162a7b572d839f4e808fb6"} Nov 22 07:18:50 crc kubenswrapper[4858]: I1122 07:18:50.125532 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlfqj" podStartSLOduration=1.58996052 podStartE2EDuration="4.125517853s" podCreationTimestamp="2025-11-22 07:18:46 +0000 UTC" firstStartedPulling="2025-11-22 07:18:47.05233396 +0000 UTC m=+488.893756966" lastFinishedPulling="2025-11-22 07:18:49.587891293 +0000 UTC m=+491.429314299" observedRunningTime="2025-11-22 07:18:50.122245437 +0000 UTC m=+491.963668463" watchObservedRunningTime="2025-11-22 07:18:50.125517853 +0000 UTC m=+491.966940859" Nov 22 07:18:51 crc kubenswrapper[4858]: I1122 07:18:51.117101 4858 generic.go:334] "Generic (PLEG): container finished" podID="93df6386-eaf3-4ba8-9d01-8b1819b0eb06" containerID="852db7590497e78bb44d78392b3f1c15a9b88a69337e6ef373d760e278140ffe" exitCode=0 Nov 22 07:18:51 crc kubenswrapper[4858]: I1122 07:18:51.117193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsfk6" event={"ID":"93df6386-eaf3-4ba8-9d01-8b1819b0eb06","Type":"ContainerDied","Data":"852db7590497e78bb44d78392b3f1c15a9b88a69337e6ef373d760e278140ffe"} Nov 22 07:18:51 crc kubenswrapper[4858]: I1122 07:18:51.128476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerStarted","Data":"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49"} Nov 22 07:18:51 crc kubenswrapper[4858]: I1122 07:18:51.160255 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqsnt" podStartSLOduration=2.728003858 podStartE2EDuration="4.16023339s" podCreationTimestamp="2025-11-22 07:18:47 +0000 UTC" firstStartedPulling="2025-11-22 07:18:49.068789107 +0000 UTC m=+490.910212113" lastFinishedPulling="2025-11-22 07:18:50.501018639 +0000 UTC m=+492.342441645" observedRunningTime="2025-11-22 07:18:51.156457219 +0000 UTC m=+492.997880235" watchObservedRunningTime="2025-11-22 07:18:51.16023339 +0000 UTC m=+493.001656406" Nov 22 07:18:52 crc kubenswrapper[4858]: I1122 07:18:52.144566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsfk6" event={"ID":"93df6386-eaf3-4ba8-9d01-8b1819b0eb06","Type":"ContainerStarted","Data":"73fd32b538b218ed95ad57f207b618cd01c9654b518e8b856a5539b2194540cd"} Nov 22 07:18:55 crc kubenswrapper[4858]: I1122 07:18:55.386807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:55 crc kubenswrapper[4858]: I1122 07:18:55.387312 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:55 crc kubenswrapper[4858]: I1122 07:18:55.438721 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:55 crc kubenswrapper[4858]: I1122 07:18:55.460869 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsfk6" podStartSLOduration=5.983275827 podStartE2EDuration="7.460848034s" podCreationTimestamp="2025-11-22 07:18:48 +0000 UTC" firstStartedPulling="2025-11-22 07:18:50.108335641 +0000 UTC m=+491.949758647" lastFinishedPulling="2025-11-22 07:18:51.585907848 +0000 UTC m=+493.427330854" observedRunningTime="2025-11-22 07:18:52.164124319 +0000 UTC m=+494.005547325" watchObservedRunningTime="2025-11-22 07:18:55.460848034 +0000 UTC m=+497.302271040" Nov 22 07:18:56 crc kubenswrapper[4858]: I1122 07:18:56.199484 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rwt9j" Nov 22 07:18:56 crc kubenswrapper[4858]: I1122 07:18:56.382955 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:56 crc kubenswrapper[4858]: I1122 07:18:56.383261 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:56 crc kubenswrapper[4858]: I1122 07:18:56.425582 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:57 crc kubenswrapper[4858]: I1122 07:18:57.207435 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlfqj" Nov 22 07:18:57 crc kubenswrapper[4858]: I1122 07:18:57.779681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:57 crc kubenswrapper[4858]: I1122 07:18:57.779969 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:57 crc kubenswrapper[4858]: I1122 07:18:57.818469 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:58 crc kubenswrapper[4858]: I1122 07:18:58.209004 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:18:59 crc kubenswrapper[4858]: I1122 07:18:59.071076 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:59 crc kubenswrapper[4858]: I1122 07:18:59.071719 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:59 crc kubenswrapper[4858]: I1122 07:18:59.111371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:18:59 crc kubenswrapper[4858]: I1122 07:18:59.229259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsfk6" Nov 22 07:20:45 crc kubenswrapper[4858]: I1122 07:20:45.312624 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 
07:20:45 crc kubenswrapper[4858]: I1122 07:20:45.313148 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:21:15 crc kubenswrapper[4858]: I1122 07:21:15.311892 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:21:15 crc kubenswrapper[4858]: I1122 07:21:15.312506 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:21:45 crc kubenswrapper[4858]: I1122 07:21:45.312418 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:21:45 crc kubenswrapper[4858]: I1122 07:21:45.313062 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:21:45 crc kubenswrapper[4858]: I1122 07:21:45.313119 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:21:45 crc kubenswrapper[4858]: I1122 07:21:45.313858 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:21:45 crc kubenswrapper[4858]: I1122 07:21:45.313924 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5" gracePeriod=600 Nov 22 07:21:46 crc kubenswrapper[4858]: I1122 07:21:46.081091 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5" exitCode=0 Nov 22 07:21:46 crc kubenswrapper[4858]: I1122 07:21:46.081156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5"} Nov 22 07:21:46 crc kubenswrapper[4858]: I1122 
07:21:46.081497 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940"} Nov 22 07:21:46 crc kubenswrapper[4858]: I1122 07:21:46.081524 4858 scope.go:117] "RemoveContainer" containerID="0cd00a3097c2d15fb4beb8499bb46da6d8a7f79af5f46ffb8eec499c9122cc18" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.183195 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p2zp7"] Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.184312 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.210848 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p2zp7"] Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.294987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdhs\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-kube-api-access-9zdhs\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fd579a13-1de7-460f-a65e-2f33a6b0fe56-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-certificates\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fd579a13-1de7-460f-a65e-2f33a6b0fe56-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-trusted-ca\") pod 
\"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-tls\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.295418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-bound-sa-token\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.317203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.396937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-certificates\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.396998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fd579a13-1de7-460f-a65e-2f33a6b0fe56-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.397028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-trusted-ca\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.397058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-tls\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.397075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-bound-sa-token\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 
07:22:29.397106 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdhs\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-kube-api-access-9zdhs\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.397126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fd579a13-1de7-460f-a65e-2f33a6b0fe56-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.398204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fd579a13-1de7-460f-a65e-2f33a6b0fe56-ca-trust-extracted\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.398357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-trusted-ca\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.398512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-certificates\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.406081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fd579a13-1de7-460f-a65e-2f33a6b0fe56-installation-pull-secrets\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.406305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-registry-tls\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.412216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-bound-sa-token\") pod \"image-registry-66df7c8f76-p2zp7\" (UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.414374 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdhs\" (UniqueName: \"kubernetes.io/projected/fd579a13-1de7-460f-a65e-2f33a6b0fe56-kube-api-access-9zdhs\") pod \"image-registry-66df7c8f76-p2zp7\" 
(UID: \"fd579a13-1de7-460f-a65e-2f33a6b0fe56\") " pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.501141 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:29 crc kubenswrapper[4858]: I1122 07:22:29.668625 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-p2zp7"] Nov 22 07:22:30 crc kubenswrapper[4858]: I1122 07:22:30.297261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" event={"ID":"fd579a13-1de7-460f-a65e-2f33a6b0fe56","Type":"ContainerStarted","Data":"85c35c3eff97f96e8b01a45b9fa768f2818c9867627d96ac2d961f9664e430f6"} Nov 22 07:22:31 crc kubenswrapper[4858]: I1122 07:22:31.312522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" event={"ID":"fd579a13-1de7-460f-a65e-2f33a6b0fe56","Type":"ContainerStarted","Data":"27f4445cac95b1f3cb025d04d1c8eaf6b62a14bae7479500a39f961b7e7eb16b"} Nov 22 07:22:31 crc kubenswrapper[4858]: I1122 07:22:31.312928 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:31 crc kubenswrapper[4858]: I1122 07:22:31.332607 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" podStartSLOduration=2.332586769 podStartE2EDuration="2.332586769s" podCreationTimestamp="2025-11-22 07:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:22:31.329207811 +0000 UTC m=+713.170630837" watchObservedRunningTime="2025-11-22 07:22:31.332586769 +0000 UTC m=+713.174009775" Nov 22 07:22:49 crc kubenswrapper[4858]: I1122 07:22:49.506133 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-p2zp7" Nov 22 07:22:49 crc kubenswrapper[4858]: I1122 07:22:49.556451 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:23:14 crc kubenswrapper[4858]: I1122 07:23:14.595891 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" podUID="022ff96d-cffc-425d-8bce-d26d9ce573d3" containerName="registry" containerID="cri-o://dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f" gracePeriod=30 Nov 22 07:23:14 crc kubenswrapper[4858]: I1122 07:23:14.930242 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.044457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk7bj\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.045514 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.045901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.045938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.045970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.046026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.046055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.046137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted\") pod \"022ff96d-cffc-425d-8bce-d26d9ce573d3\" (UID: \"022ff96d-cffc-425d-8bce-d26d9ce573d3\") " Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.047211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.047295 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.050907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj" (OuterVolumeSpecName: "kube-api-access-fk7bj") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "kube-api-access-fk7bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.051382 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.051605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.057941 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.058299 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.064928 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "022ff96d-cffc-425d-8bce-d26d9ce573d3" (UID: "022ff96d-cffc-425d-8bce-d26d9ce573d3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148012 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/022ff96d-cffc-425d-8bce-d26d9ce573d3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148050 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk7bj\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-kube-api-access-fk7bj\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148064 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148073 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/022ff96d-cffc-425d-8bce-d26d9ce573d3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148082 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148090 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/022ff96d-cffc-425d-8bce-d26d9ce573d3-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.148097 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/022ff96d-cffc-425d-8bce-d26d9ce573d3-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.576491 4858 generic.go:334] "Generic (PLEG): container finished" podID="022ff96d-cffc-425d-8bce-d26d9ce573d3" containerID="dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f" exitCode=0 Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.576541 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.576540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" event={"ID":"022ff96d-cffc-425d-8bce-d26d9ce573d3","Type":"ContainerDied","Data":"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f"} Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.576569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vrgkv" event={"ID":"022ff96d-cffc-425d-8bce-d26d9ce573d3","Type":"ContainerDied","Data":"87739e949dbd79aff91838d4222bc6a12eaa44a6293216281e6dc59bd17931fb"} Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.576587 4858 scope.go:117] "RemoveContainer" containerID="dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.596188 4858 scope.go:117] "RemoveContainer" containerID="dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f" Nov 22 07:23:15 crc kubenswrapper[4858]: E1122 07:23:15.597386 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f\": container with ID starting with dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f not found: ID does not exist" containerID="dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.597431 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f"} err="failed to get container status \"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f\": rpc error: code = NotFound desc = could not find container \"dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f\": container with ID starting with dd8e02d7cebb9f5ee90d5736b6c23aeb2f8f2744ef370210dbe725a36eb90a5f not found: ID does not exist" Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.598238 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:23:15 crc kubenswrapper[4858]: I1122 07:23:15.601515 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vrgkv"] Nov 22 07:23:17 crc kubenswrapper[4858]: I1122 07:23:17.541471 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022ff96d-cffc-425d-8bce-d26d9ce573d3" path="/var/lib/kubelet/pods/022ff96d-cffc-425d-8bce-d26d9ce573d3/volumes" Nov 22 07:23:27 crc kubenswrapper[4858]: I1122 07:23:27.810292 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:23:27 crc kubenswrapper[4858]: I1122 07:23:27.811151 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" podUID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" containerName="controller-manager" containerID="cri-o://43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba" gracePeriod=30 Nov 22 07:23:27 crc kubenswrapper[4858]: I1122 07:23:27.905040 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:23:27 crc kubenswrapper[4858]: I1122 07:23:27.905294 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" containerID="cri-o://a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3" gracePeriod=30 Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.186765 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.268125 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.321759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn462\" (UniqueName: \"kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462\") pod \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.321880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca\") pod \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.321907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config\") pod \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.321934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles\") pod \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.321960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert\") pod \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\" (UID: \"efcd6f6a-7dd3-426d-9e27-b991c98b47a4\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.322643 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "efcd6f6a-7dd3-426d-9e27-b991c98b47a4" (UID: "efcd6f6a-7dd3-426d-9e27-b991c98b47a4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.322719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "efcd6f6a-7dd3-426d-9e27-b991c98b47a4" (UID: "efcd6f6a-7dd3-426d-9e27-b991c98b47a4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.322729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config" (OuterVolumeSpecName: "config") pod "efcd6f6a-7dd3-426d-9e27-b991c98b47a4" (UID: "efcd6f6a-7dd3-426d-9e27-b991c98b47a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.323069 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.323087 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.323095 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.328001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462" (OuterVolumeSpecName: "kube-api-access-wn462") pod "efcd6f6a-7dd3-426d-9e27-b991c98b47a4" (UID: "efcd6f6a-7dd3-426d-9e27-b991c98b47a4"). InnerVolumeSpecName "kube-api-access-wn462". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.328146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "efcd6f6a-7dd3-426d-9e27-b991c98b47a4" (UID: "efcd6f6a-7dd3-426d-9e27-b991c98b47a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.423888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca\") pod \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.423960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert\") pod \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.424019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config\") pod \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.424075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs5jn\" (UniqueName: \"kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn\") pod \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\" (UID: \"55aeedcc-4db9-4ba7-87d3-bae650dc8af0\") " Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.424467 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.424492 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn462\" (UniqueName: \"kubernetes.io/projected/efcd6f6a-7dd3-426d-9e27-b991c98b47a4-kube-api-access-wn462\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.425742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config" (OuterVolumeSpecName: "config") pod "55aeedcc-4db9-4ba7-87d3-bae650dc8af0" (UID: "55aeedcc-4db9-4ba7-87d3-bae650dc8af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.425988 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca" (OuterVolumeSpecName: "client-ca") pod "55aeedcc-4db9-4ba7-87d3-bae650dc8af0" (UID: "55aeedcc-4db9-4ba7-87d3-bae650dc8af0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.428891 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn" (OuterVolumeSpecName: "kube-api-access-xs5jn") pod "55aeedcc-4db9-4ba7-87d3-bae650dc8af0" (UID: "55aeedcc-4db9-4ba7-87d3-bae650dc8af0"). InnerVolumeSpecName "kube-api-access-xs5jn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.429909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55aeedcc-4db9-4ba7-87d3-bae650dc8af0" (UID: "55aeedcc-4db9-4ba7-87d3-bae650dc8af0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.525907 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs5jn\" (UniqueName: \"kubernetes.io/projected/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-kube-api-access-xs5jn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.525969 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.526010 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.526023 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aeedcc-4db9-4ba7-87d3-bae650dc8af0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.653371 4858 generic.go:334] "Generic (PLEG): container finished" podID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" containerID="43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba" exitCode=0 Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.653502 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.653524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" event={"ID":"efcd6f6a-7dd3-426d-9e27-b991c98b47a4","Type":"ContainerDied","Data":"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba"} Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.653586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5ktsq" event={"ID":"efcd6f6a-7dd3-426d-9e27-b991c98b47a4","Type":"ContainerDied","Data":"d2ffe1db14a410f647f1f9c8b9762f235669bdc937b92505acb723ae4b1c2325"} Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.653612 4858 scope.go:117] "RemoveContainer" containerID="43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.655760 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.655740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" event={"ID":"55aeedcc-4db9-4ba7-87d3-bae650dc8af0","Type":"ContainerDied","Data":"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3"} Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.655598 4858 generic.go:334] "Generic (PLEG): container finished" podID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerID="a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3" exitCode=0 Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.655937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z" event={"ID":"55aeedcc-4db9-4ba7-87d3-bae650dc8af0","Type":"ContainerDied","Data":"2f9b7c7a369be9dcd1f1efff18877a4dac6ac6a375f59f0606c3afca066d1364"} Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.672876 4858 scope.go:117] "RemoveContainer" containerID="43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba" Nov 22 07:23:28 crc kubenswrapper[4858]: E1122 07:23:28.675419 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba\": container with ID starting with 43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba not found: ID does not exist" containerID="43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.675471 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba"} err="failed to get container status \"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba\": rpc error: code = NotFound desc = could not find container \"43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba\": container with ID starting with 43f71d5950e9f9dfd8df45a906057b3c4e59e767ed7c5334b8eecda631255bba not found: ID does not exist" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.675503 4858 scope.go:117] "RemoveContainer" containerID="a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.690767 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.695636 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5ktsq"] Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.699424 4858 scope.go:117] "RemoveContainer" containerID="a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.699579 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:23:28 crc kubenswrapper[4858]: E1122 07:23:28.699908 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3\": container with ID starting with 
a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3 not found: ID does not exist" containerID="a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.699937 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3"} err="failed to get container status \"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3\": rpc error: code = NotFound desc = could not find container \"a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3\": container with ID starting with a689843965511843f0d9fb634e545e18812d4bc308cc861b2de9a4052da634c3 not found: ID does not exist" Nov 22 07:23:28 crc kubenswrapper[4858]: I1122 07:23:28.702118 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m8h8z"] Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.023977 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4"] Nov 22 07:23:29 crc kubenswrapper[4858]: E1122 07:23:29.024199 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022ff96d-cffc-425d-8bce-d26d9ce573d3" containerName="registry" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024216 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="022ff96d-cffc-425d-8bce-d26d9ce573d3" containerName="registry" Nov 22 07:23:29 crc kubenswrapper[4858]: E1122 07:23:29.024234 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" containerName="controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024243 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" containerName="controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: E1122 07:23:29.024254 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024262 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024392 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" containerName="route-controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024403 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" containerName="controller-manager" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024420 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="022ff96d-cffc-425d-8bce-d26d9ce573d3" containerName="registry" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.024894 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.026987 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.027039 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5"] Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.027544 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.027607 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.027745 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.028255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.028456 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.028527 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.030958 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.031053 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.031419 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.031749 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.031868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.033528 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.036896 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5"] Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.039458 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.042879 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4"] Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.133989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134035 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgmm2\" (UniqueName: \"kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qs9\" (UniqueName: \"kubernetes.io/projected/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-kube-api-access-z8qs9\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-config\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-serving-cert\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-client-ca\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.134665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-proxy-ca-bundles\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.170414 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4"] Nov 22 07:23:29 crc kubenswrapper[4858]: E1122 07:23:29.170874 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-tgmm2 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" podUID="3570f709-2b7b-43f0-9f27-06df057c22d8" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.235896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.235944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-client-ca\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.235963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-proxy-ca-bundles\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.235985 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgmm2\" (UniqueName: \"kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8qs9\" (UniqueName: \"kubernetes.io/projected/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-kube-api-access-z8qs9\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236052 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-config\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-serving-cert\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.236889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.237106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.237809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-client-ca\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.238059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-config\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.238102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-proxy-ca-bundles\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.239775 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " 
pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.240550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-serving-cert\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.251167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgmm2\" (UniqueName: \"kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2\") pod \"route-controller-manager-8bbb49b96-gx5r4\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.251219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8qs9\" (UniqueName: \"kubernetes.io/projected/0f22692d-f6a4-4b5b-8bde-65457a2c6dcd-kube-api-access-z8qs9\") pod \"controller-manager-cfbdfdf58-fjcx5\" (UID: \"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd\") " pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.357633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.546889 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55aeedcc-4db9-4ba7-87d3-bae650dc8af0" path="/var/lib/kubelet/pods/55aeedcc-4db9-4ba7-87d3-bae650dc8af0/volumes" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.547904 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efcd6f6a-7dd3-426d-9e27-b991c98b47a4" path="/var/lib/kubelet/pods/efcd6f6a-7dd3-426d-9e27-b991c98b47a4/volumes" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.559497 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5"] Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.662348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" event={"ID":"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd","Type":"ContainerStarted","Data":"68a05bb0c58a06867e21dd323b53d03c666e504dbcc0a2b76d8831a83e123fb0"} Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.665214 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.682074 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.842498 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgmm2\" (UniqueName: \"kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2\") pod \"3570f709-2b7b-43f0-9f27-06df057c22d8\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.842685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca\") pod \"3570f709-2b7b-43f0-9f27-06df057c22d8\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.842821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert\") pod \"3570f709-2b7b-43f0-9f27-06df057c22d8\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.842913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config\") pod \"3570f709-2b7b-43f0-9f27-06df057c22d8\" (UID: \"3570f709-2b7b-43f0-9f27-06df057c22d8\") " Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.845654 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config" (OuterVolumeSpecName: "config") pod "3570f709-2b7b-43f0-9f27-06df057c22d8" (UID: "3570f709-2b7b-43f0-9f27-06df057c22d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.845946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca" (OuterVolumeSpecName: "client-ca") pod "3570f709-2b7b-43f0-9f27-06df057c22d8" (UID: "3570f709-2b7b-43f0-9f27-06df057c22d8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.852168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2" (OuterVolumeSpecName: "kube-api-access-tgmm2") pod "3570f709-2b7b-43f0-9f27-06df057c22d8" (UID: "3570f709-2b7b-43f0-9f27-06df057c22d8"). InnerVolumeSpecName "kube-api-access-tgmm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.853023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3570f709-2b7b-43f0-9f27-06df057c22d8" (UID: "3570f709-2b7b-43f0-9f27-06df057c22d8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.944702 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3570f709-2b7b-43f0-9f27-06df057c22d8-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.944733 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.944743 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgmm2\" (UniqueName: \"kubernetes.io/projected/3570f709-2b7b-43f0-9f27-06df057c22d8-kube-api-access-tgmm2\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:29 crc kubenswrapper[4858]: I1122 07:23:29.944752 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3570f709-2b7b-43f0-9f27-06df057c22d8-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.672047 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.675523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" event={"ID":"0f22692d-f6a4-4b5b-8bde-65457a2c6dcd","Type":"ContainerStarted","Data":"d3e49f42cf4888382dc5b4b2695d2f03a315b039aa9a73f65da3d20e958ee5cd"} Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.675712 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.680907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.691799 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cfbdfdf58-fjcx5" podStartSLOduration=3.6917849130000002 podStartE2EDuration="3.691784913s" podCreationTimestamp="2025-11-22 07:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:30.690455421 +0000 UTC m=+772.531878437" watchObservedRunningTime="2025-11-22 07:23:30.691784913 +0000 UTC m=+772.533207919" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.735082 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt"] Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.735931 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.737000 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4"] Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.739401 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.739856 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.740167 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.741021 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bbb49b96-gx5r4"] Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.741114 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.741226 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.741635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.748144 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt"] Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.855820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vnqw\" (UniqueName: \"kubernetes.io/projected/5a72e69e-4312-4054-9c94-03673dee8cea-kube-api-access-8vnqw\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.855965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-client-ca\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.856016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-config\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.856037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72e69e-4312-4054-9c94-03673dee8cea-serving-cert\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: 
\"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.956795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-config\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.956840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72e69e-4312-4054-9c94-03673dee8cea-serving-cert\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.956888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vnqw\" (UniqueName: \"kubernetes.io/projected/5a72e69e-4312-4054-9c94-03673dee8cea-kube-api-access-8vnqw\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.956921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-client-ca\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.957867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-client-ca\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.958868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72e69e-4312-4054-9c94-03673dee8cea-config\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.963153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72e69e-4312-4054-9c94-03673dee8cea-serving-cert\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:30 crc kubenswrapper[4858]: I1122 07:23:30.972340 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vnqw\" (UniqueName: \"kubernetes.io/projected/5a72e69e-4312-4054-9c94-03673dee8cea-kube-api-access-8vnqw\") pod \"route-controller-manager-5fc7bb488b-gtndt\" (UID: \"5a72e69e-4312-4054-9c94-03673dee8cea\") " 
pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.058963 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.512483 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt"] Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.542245 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3570f709-2b7b-43f0-9f27-06df057c22d8" path="/var/lib/kubelet/pods/3570f709-2b7b-43f0-9f27-06df057c22d8/volumes" Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.678717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" event={"ID":"5a72e69e-4312-4054-9c94-03673dee8cea","Type":"ContainerStarted","Data":"32b16db7b91def1afbd3f83ecc43725425ba753749a01ebec2c4875c89629e20"} Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.678770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" event={"ID":"5a72e69e-4312-4054-9c94-03673dee8cea","Type":"ContainerStarted","Data":"b2ade4e5fb26dd51346c5abfb2b4d2170a7890afea35e538f814d21bb8df0029"} Nov 22 07:23:31 crc kubenswrapper[4858]: I1122 07:23:31.700971 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" podStartSLOduration=2.700948664 podStartE2EDuration="2.700948664s" podCreationTimestamp="2025-11-22 07:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:31.698389221 +0000 UTC m=+773.539812237" watchObservedRunningTime="2025-11-22 07:23:31.700948664 +0000 UTC m=+773.542371670" Nov 22 07:23:32 crc kubenswrapper[4858]: I1122 07:23:32.683080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:32 crc kubenswrapper[4858]: I1122 07:23:32.689651 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fc7bb488b-gtndt" Nov 22 07:23:35 crc kubenswrapper[4858]: I1122 07:23:35.879637 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:23:45 crc kubenswrapper[4858]: I1122 07:23:45.312608 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:23:45 crc kubenswrapper[4858]: I1122 07:23:45.313194 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:24:15 crc kubenswrapper[4858]: I1122 07:24:15.312008 4858 patch_prober.go:28] interesting 
pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:24:15 crc kubenswrapper[4858]: I1122 07:24:15.312662 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:24:45 crc kubenswrapper[4858]: I1122 07:24:45.312776 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:24:45 crc kubenswrapper[4858]: I1122 07:24:45.314524 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:24:45 crc kubenswrapper[4858]: I1122 07:24:45.314647 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:24:45 crc kubenswrapper[4858]: I1122 07:24:45.315301 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:24:45 crc kubenswrapper[4858]: I1122 07:24:45.315489 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940" gracePeriod=600 Nov 22 07:24:46 crc kubenswrapper[4858]: I1122 07:24:46.056764 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940" exitCode=0 Nov 22 07:24:46 crc kubenswrapper[4858]: I1122 07:24:46.056857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940"} Nov 22 07:24:46 crc kubenswrapper[4858]: I1122 07:24:46.057564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb"} Nov 22 07:24:46 crc kubenswrapper[4858]: I1122 07:24:46.057588 4858 scope.go:117] "RemoveContainer" containerID="b42591c29d97789277a976ab9c2b059ee8eaa00c2fc3283207a5ea3642f045d5" Nov 22 07:25:51 
crc kubenswrapper[4858]: I1122 07:25:51.379751 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.381450 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.390207 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.532682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.532735 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.532833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldcg\" (UniqueName: \"kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.579382 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.580885 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.599067 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.633865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldcg\" (UniqueName: \"kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.633978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.634005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.634507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.634613 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.655641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldcg\" (UniqueName: \"kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg\") pod \"community-operators-78r4z\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.699136 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.735302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.735742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.735774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvnf\" (UniqueName: \"kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.836875 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.836949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.836982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvnf\" (UniqueName: \"kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.840624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.840882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities\") pod \"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.882136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvnf\" (UniqueName: \"kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf\") pod 
\"certified-operators-tcr7v\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.895229 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:25:51 crc kubenswrapper[4858]: I1122 07:25:51.967900 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.173722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.537544 4858 generic.go:334] "Generic (PLEG): container finished" podID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerID="938835291ef7c45d837fb172c82ead0634aec83bcee43075182065bc383c1c6f" exitCode=0 Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.537630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerDied","Data":"938835291ef7c45d837fb172c82ead0634aec83bcee43075182065bc383c1c6f"} Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.537949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerStarted","Data":"f2d9b87d66204f9949a0ead7d297c888bee705955a4f8360fea5395a0a3d0f9d"} Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.540402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerStarted","Data":"61069b92dc6578153acfd7357949cabfd5c7c7f7e1411efb1f8d7a7fd98140bb"} Nov 22 07:25:52 crc kubenswrapper[4858]: I1122 07:25:52.540455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerStarted","Data":"5af8bc8a34127e4a1919da569e171e3169a3ca596cba4653ab4d706a0c86e6e0"} Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.546420 4858 generic.go:334] "Generic (PLEG): container finished" podID="39a9743a-162e-4068-8974-3d5c75b281f9" containerID="61069b92dc6578153acfd7357949cabfd5c7c7f7e1411efb1f8d7a7fd98140bb" exitCode=0 Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.546517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerDied","Data":"61069b92dc6578153acfd7357949cabfd5c7c7f7e1411efb1f8d7a7fd98140bb"} Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.547853 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.775446 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.778053 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.788656 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.862165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfjq\" (UniqueName: \"kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.862457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.862625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.963744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.963845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbfjq\" (UniqueName: \"kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.963900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.964536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.964777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.982125 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.983467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:53 crc kubenswrapper[4858]: I1122 07:25:53.995550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbfjq\" (UniqueName: \"kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq\") pod \"redhat-marketplace-pfqmj\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.000212 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.065495 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.065546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrq8g\" (UniqueName: \"kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.065703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.095784 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.166800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.166897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.166926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrq8g\" (UniqueName: \"kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.167843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.173594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.186438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrq8g\" (UniqueName: \"kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g\") pod \"redhat-operators-4j5wz\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.325509 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.331140 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.559838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerStarted","Data":"5d0f85d09a4c0af175a80a77431a0e8bb79b29f9a01884aff5ba155360a2d46f"} Nov 22 07:25:54 crc kubenswrapper[4858]: I1122 07:25:54.663016 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:25:54 crc kubenswrapper[4858]: W1122 07:25:54.672538 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cf5494b_5560_4ea5_92f3_6140edc6f1f4.slice/crio-60ae6639e05824240e94fe977fd4520d2147b98445d1fe89fb4e3c874201123c WatchSource:0}: Error finding container 60ae6639e05824240e94fe977fd4520d2147b98445d1fe89fb4e3c874201123c: Status 404 returned error can't find the container with id 60ae6639e05824240e94fe977fd4520d2147b98445d1fe89fb4e3c874201123c Nov 22 07:25:55 crc kubenswrapper[4858]: I1122 07:25:55.568849 4858 generic.go:334] "Generic (PLEG): container finished" podID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerID="95585be8a64fd1b9520f4d2f8c73962e38df659958914913d62c0fe407f2ccfd" exitCode=0 Nov 22 07:25:55 crc kubenswrapper[4858]: I1122 07:25:55.569008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerDied","Data":"95585be8a64fd1b9520f4d2f8c73962e38df659958914913d62c0fe407f2ccfd"} Nov 22 07:25:55 crc kubenswrapper[4858]: I1122 07:25:55.573262 4858 generic.go:334] "Generic (PLEG): container finished" podID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerID="ebfe400ea74e39d5525b90b5c80937a6d87b7403fafc47cf31a3cc8267c634d4" exitCode=0 Nov 22 07:25:55 crc kubenswrapper[4858]: I1122 07:25:55.573347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerDied","Data":"ebfe400ea74e39d5525b90b5c80937a6d87b7403fafc47cf31a3cc8267c634d4"} Nov 22 07:25:55 crc kubenswrapper[4858]: I1122 07:25:55.573417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerStarted","Data":"60ae6639e05824240e94fe977fd4520d2147b98445d1fe89fb4e3c874201123c"} Nov 22 07:25:59 crc kubenswrapper[4858]: I1122 07:25:59.593746 4858 generic.go:334] "Generic (PLEG): container finished" podID="39a9743a-162e-4068-8974-3d5c75b281f9" containerID="709fe10bef7e3a699c3a7b06d0b7009018a61df381370a0af60424d8d00d808e" exitCode=0 Nov 22 07:25:59 crc kubenswrapper[4858]: I1122 07:25:59.593897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerDied","Data":"709fe10bef7e3a699c3a7b06d0b7009018a61df381370a0af60424d8d00d808e"} Nov 22 07:25:59 crc kubenswrapper[4858]: I1122 07:25:59.595473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" 
event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerDied","Data":"3f707df063568e46eb4e2afe8ebf2e9b7f8c3e4eea8aa3dedbc7e700023b04bd"} Nov 22 07:25:59 crc kubenswrapper[4858]: I1122 07:25:59.595391 4858 generic.go:334] "Generic (PLEG): container finished" podID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerID="3f707df063568e46eb4e2afe8ebf2e9b7f8c3e4eea8aa3dedbc7e700023b04bd" exitCode=0 Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.631635 4858 generic.go:334] "Generic (PLEG): container finished" podID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerID="cfb08748c67cbd93fbba3a0c4e206050ee0208789ab68ac7953c9b67c0424a2d" exitCode=0 Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.631708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerDied","Data":"cfb08748c67cbd93fbba3a0c4e206050ee0208789ab68ac7953c9b67c0424a2d"} Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.633896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerStarted","Data":"9cbb99bb76dd2c3c2c4a13bb891511d9d67f72e21486d98504c50fe7953f501f"} Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.636587 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerStarted","Data":"318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b"} Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.638838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerStarted","Data":"059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae"} Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.688008 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tcr7v" podStartSLOduration=3.842255278 podStartE2EDuration="15.687987516s" podCreationTimestamp="2025-11-22 07:25:51 +0000 UTC" firstStartedPulling="2025-11-22 07:25:53.547989692 +0000 UTC m=+915.389412698" lastFinishedPulling="2025-11-22 07:26:05.39372193 +0000 UTC m=+927.235144936" observedRunningTime="2025-11-22 07:26:06.686685584 +0000 UTC m=+928.528108590" watchObservedRunningTime="2025-11-22 07:26:06.687987516 +0000 UTC m=+928.529410522" Nov 22 07:26:06 crc kubenswrapper[4858]: I1122 07:26:06.688400 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78r4z" podStartSLOduration=3.903460228 podStartE2EDuration="15.688393538s" podCreationTimestamp="2025-11-22 07:25:51 +0000 UTC" firstStartedPulling="2025-11-22 07:25:53.547670041 +0000 UTC m=+915.389093047" lastFinishedPulling="2025-11-22 07:26:05.332603351 +0000 UTC m=+927.174026357" observedRunningTime="2025-11-22 07:26:06.669611977 +0000 UTC m=+928.511034983" watchObservedRunningTime="2025-11-22 07:26:06.688393538 +0000 UTC m=+928.529816555" Nov 22 07:26:07 crc kubenswrapper[4858]: I1122 07:26:07.648249 4858 generic.go:334] "Generic (PLEG): container finished" podID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerID="9cbb99bb76dd2c3c2c4a13bb891511d9d67f72e21486d98504c50fe7953f501f" exitCode=0 Nov 22 07:26:07 crc kubenswrapper[4858]: I1122 07:26:07.648349 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerDied","Data":"9cbb99bb76dd2c3c2c4a13bb891511d9d67f72e21486d98504c50fe7953f501f"} Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.674920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerStarted","Data":"722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e"} Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.701136 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.701221 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.750449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.896783 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.896853 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:11 crc kubenswrapper[4858]: I1122 07:26:11.937867 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:12 crc kubenswrapper[4858]: I1122 07:26:12.720675 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:12 crc kubenswrapper[4858]: I1122 07:26:12.735231 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:13 crc kubenswrapper[4858]: I1122 07:26:13.181479 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.096391 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.096760 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.140499 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.160786 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfqmj" podStartSLOduration=8.224065267 podStartE2EDuration="21.160769256s" podCreationTimestamp="2025-11-22 07:25:53 +0000 UTC" firstStartedPulling="2025-11-22 07:25:56.51487164 +0000 UTC m=+918.356294656" lastFinishedPulling="2025-11-22 07:26:09.451575639 +0000 UTC m=+931.292998645" observedRunningTime="2025-11-22 07:26:13.704840632 +0000 UTC m=+935.546263638" watchObservedRunningTime="2025-11-22 07:26:14.160769256 +0000 UTC m=+936.002192262" Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.691621 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-tcr7v" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="registry-server" containerID="cri-o://059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" gracePeriod=2 Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.981747 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:26:14 crc kubenswrapper[4858]: I1122 07:26:14.981982 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78r4z" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="registry-server" containerID="cri-o://318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" gracePeriod=2 Nov 22 07:26:15 crc kubenswrapper[4858]: I1122 07:26:15.737093 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:17 crc kubenswrapper[4858]: I1122 07:26:17.382071 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:26:17 crc kubenswrapper[4858]: I1122 07:26:17.706908 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfqmj" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="registry-server" containerID="cri-o://722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" gracePeriod=2 Nov 22 07:26:18 crc kubenswrapper[4858]: I1122 07:26:18.724171 4858 generic.go:334] "Generic (PLEG): container finished" podID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerID="318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" exitCode=0 Nov 22 07:26:18 crc kubenswrapper[4858]: I1122 07:26:18.724230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerDied","Data":"318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b"} Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.699917 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b is running failed: container process not found" containerID="318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.700715 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b is running failed: container process not found" containerID="318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.701076 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b is running failed: container process not found" containerID="318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.701139 4858 prober.go:104] 
"Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-78r4z" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="registry-server" Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.897917 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae is running failed: container process not found" containerID="059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.898551 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae is running failed: container process not found" containerID="059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.899117 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae is running failed: container process not found" containerID="059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:21 crc kubenswrapper[4858]: E1122 07:26:21.899165 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-tcr7v" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="registry-server" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.251873 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcr7v_39a9743a-162e-4068-8974-3d5c75b281f9/registry-server/0.log" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.253314 4858 generic.go:334] "Generic (PLEG): container finished" podID="39a9743a-162e-4068-8974-3d5c75b281f9" containerID="059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" exitCode=137 Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.253355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerDied","Data":"059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae"} Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.755410 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.875756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities\") pod \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.875834 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sldcg\" (UniqueName: \"kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg\") pod \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.875873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content\") pod \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\" (UID: \"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e\") " Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.877017 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities" (OuterVolumeSpecName: "utilities") pod "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" (UID: "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.882388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg" (OuterVolumeSpecName: "kube-api-access-sldcg") pod "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" (UID: "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e"). InnerVolumeSpecName "kube-api-access-sldcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.977419 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:22 crc kubenswrapper[4858]: I1122 07:26:22.977470 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sldcg\" (UniqueName: \"kubernetes.io/projected/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-kube-api-access-sldcg\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.261250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78r4z" event={"ID":"65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e","Type":"ContainerDied","Data":"f2d9b87d66204f9949a0ead7d297c888bee705955a4f8360fea5395a0a3d0f9d"} Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.261687 4858 scope.go:117] "RemoveContainer" containerID="318ec9bfc454d7818e80a31265980a4a290c1698778888cd74dc1ec1f011218b" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.261643 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78r4z" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.277024 4858 scope.go:117] "RemoveContainer" containerID="3f707df063568e46eb4e2afe8ebf2e9b7f8c3e4eea8aa3dedbc7e700023b04bd" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.293161 4858 scope.go:117] "RemoveContainer" containerID="938835291ef7c45d837fb172c82ead0634aec83bcee43075182065bc383c1c6f" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.467087 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcr7v_39a9743a-162e-4068-8974-3d5c75b281f9/registry-server/0.log" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.468114 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.483643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvnf\" (UniqueName: \"kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf\") pod \"39a9743a-162e-4068-8974-3d5c75b281f9\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.483726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content\") pod \"39a9743a-162e-4068-8974-3d5c75b281f9\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.483813 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities\") pod \"39a9743a-162e-4068-8974-3d5c75b281f9\" (UID: \"39a9743a-162e-4068-8974-3d5c75b281f9\") " Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.485459 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities" (OuterVolumeSpecName: "utilities") pod "39a9743a-162e-4068-8974-3d5c75b281f9" (UID: "39a9743a-162e-4068-8974-3d5c75b281f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.488135 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf" (OuterVolumeSpecName: "kube-api-access-thvnf") pod "39a9743a-162e-4068-8974-3d5c75b281f9" (UID: "39a9743a-162e-4068-8974-3d5c75b281f9"). InnerVolumeSpecName "kube-api-access-thvnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.584930 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thvnf\" (UniqueName: \"kubernetes.io/projected/39a9743a-162e-4068-8974-3d5c75b281f9-kube-api-access-thvnf\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:23 crc kubenswrapper[4858]: I1122 07:26:23.584965 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:24 crc kubenswrapper[4858]: E1122 07:26:24.098019 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e is running failed: container process not found" containerID="722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:24 crc kubenswrapper[4858]: E1122 07:26:24.098582 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e is running failed: container process not found" containerID="722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:24 crc kubenswrapper[4858]: E1122 07:26:24.098860 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e is running failed: container process not found" containerID="722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:26:24 crc kubenswrapper[4858]: E1122 07:26:24.098896 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-pfqmj" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="registry-server" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.271958 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcr7v_39a9743a-162e-4068-8974-3d5c75b281f9/registry-server/0.log" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.274430 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcr7v" event={"ID":"39a9743a-162e-4068-8974-3d5c75b281f9","Type":"ContainerDied","Data":"5af8bc8a34127e4a1919da569e171e3169a3ca596cba4653ab4d706a0c86e6e0"} Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.274467 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcr7v" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.274504 4858 scope.go:117] "RemoveContainer" containerID="059c3245a45bb4a50db0e72b807dccef197cc2ac87f43771f64502a4731520ae" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.292419 4858 scope.go:117] "RemoveContainer" containerID="709fe10bef7e3a699c3a7b06d0b7009018a61df381370a0af60424d8d00d808e" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.314518 4858 scope.go:117] "RemoveContainer" containerID="61069b92dc6578153acfd7357949cabfd5c7c7f7e1411efb1f8d7a7fd98140bb" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.786807 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pfqmj_da65e286-bb95-4ca1-8492-7d9d7a98d7ec/registry-server/0.log" Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.791832 4858 generic.go:334] "Generic (PLEG): container finished" podID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerID="722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" exitCode=137 Nov 22 07:26:24 crc kubenswrapper[4858]: I1122 07:26:24.791923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerDied","Data":"722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e"} Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.052032 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pfqmj_da65e286-bb95-4ca1-8492-7d9d7a98d7ec/registry-server/0.log" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.053648 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.134915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39a9743a-162e-4068-8974-3d5c75b281f9" (UID: "39a9743a-162e-4068-8974-3d5c75b281f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.204453 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.204749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content\") pod \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.204817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities\") pod \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.204861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbfjq\" (UniqueName: \"kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq\") pod \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\" (UID: \"da65e286-bb95-4ca1-8492-7d9d7a98d7ec\") " Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.205044 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9743a-162e-4068-8974-3d5c75b281f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.205782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities" (OuterVolumeSpecName: "utilities") pod "da65e286-bb95-4ca1-8492-7d9d7a98d7ec" (UID: "da65e286-bb95-4ca1-8492-7d9d7a98d7ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.207432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da65e286-bb95-4ca1-8492-7d9d7a98d7ec" (UID: "da65e286-bb95-4ca1-8492-7d9d7a98d7ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.209685 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tcr7v"] Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.306343 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.306378 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.542436 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" path="/var/lib/kubelet/pods/39a9743a-162e-4068-8974-3d5c75b281f9/volumes" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.621920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq" (OuterVolumeSpecName: "kube-api-access-dbfjq") pod "da65e286-bb95-4ca1-8492-7d9d7a98d7ec" (UID: "da65e286-bb95-4ca1-8492-7d9d7a98d7ec"). InnerVolumeSpecName "kube-api-access-dbfjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.697262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" (UID: "65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.712212 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbfjq\" (UniqueName: \"kubernetes.io/projected/da65e286-bb95-4ca1-8492-7d9d7a98d7ec-kube-api-access-dbfjq\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.712303 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.804924 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pfqmj_da65e286-bb95-4ca1-8492-7d9d7a98d7ec/registry-server/0.log" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.805735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfqmj" event={"ID":"da65e286-bb95-4ca1-8492-7d9d7a98d7ec","Type":"ContainerDied","Data":"5d0f85d09a4c0af175a80a77431a0e8bb79b29f9a01884aff5ba155360a2d46f"} Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.805800 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfqmj" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.805815 4858 scope.go:117] "RemoveContainer" containerID="722dae67fa5c0829e73cfc51832384a903622cb6e584fbe57f174399310f705e" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.808219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerStarted","Data":"c7bf97126eec168504c185dd4c40e51971ace91757842ce175915b1db0693b92"} Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.820540 4858 scope.go:117] "RemoveContainer" containerID="cfb08748c67cbd93fbba3a0c4e206050ee0208789ab68ac7953c9b67c0424a2d" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.829617 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4j5wz" podStartSLOduration=9.750022761 podStartE2EDuration="32.829597927s" podCreationTimestamp="2025-11-22 07:25:53 +0000 UTC" firstStartedPulling="2025-11-22 07:25:56.514891111 +0000 UTC m=+918.356314117" lastFinishedPulling="2025-11-22 07:26:19.594466277 +0000 UTC m=+941.435889283" observedRunningTime="2025-11-22 07:26:25.826065414 +0000 UTC m=+947.667488420" watchObservedRunningTime="2025-11-22 07:26:25.829597927 +0000 UTC m=+947.671020933" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.839240 4858 scope.go:117] "RemoveContainer" containerID="95585be8a64fd1b9520f4d2f8c73962e38df659958914913d62c0fe407f2ccfd" Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.840234 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.843590 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfqmj"] Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.994351 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:26:25 crc kubenswrapper[4858]: I1122 07:26:25.999164 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78r4z"] Nov 22 07:26:27 crc kubenswrapper[4858]: I1122 07:26:27.553476 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" path="/var/lib/kubelet/pods/65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e/volumes" Nov 22 07:26:27 crc kubenswrapper[4858]: I1122 07:26:27.554557 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" path="/var/lib/kubelet/pods/da65e286-bb95-4ca1-8492-7d9d7a98d7ec/volumes" Nov 22 07:26:34 crc kubenswrapper[4858]: I1122 07:26:34.326241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:34 crc kubenswrapper[4858]: I1122 07:26:34.327867 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:34 crc kubenswrapper[4858]: I1122 07:26:34.375225 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:34 crc kubenswrapper[4858]: I1122 07:26:34.895656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:34 crc kubenswrapper[4858]: I1122 
07:26:34.938460 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:26:36 crc kubenswrapper[4858]: I1122 07:26:36.864110 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4j5wz" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="registry-server" containerID="cri-o://c7bf97126eec168504c185dd4c40e51971ace91757842ce175915b1db0693b92" gracePeriod=2 Nov 22 07:26:37 crc kubenswrapper[4858]: I1122 07:26:37.870233 4858 generic.go:334] "Generic (PLEG): container finished" podID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerID="c7bf97126eec168504c185dd4c40e51971ace91757842ce175915b1db0693b92" exitCode=0 Nov 22 07:26:37 crc kubenswrapper[4858]: I1122 07:26:37.870335 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerDied","Data":"c7bf97126eec168504c185dd4c40e51971ace91757842ce175915b1db0693b92"} Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.246184 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.371436 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrq8g\" (UniqueName: \"kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g\") pod \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.371688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content\") pod \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.371758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities\") pod \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\" (UID: \"6cf5494b-5560-4ea5-92f3-6140edc6f1f4\") " Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.372744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities" (OuterVolumeSpecName: "utilities") pod "6cf5494b-5560-4ea5-92f3-6140edc6f1f4" (UID: "6cf5494b-5560-4ea5-92f3-6140edc6f1f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.378768 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g" (OuterVolumeSpecName: "kube-api-access-mrq8g") pod "6cf5494b-5560-4ea5-92f3-6140edc6f1f4" (UID: "6cf5494b-5560-4ea5-92f3-6140edc6f1f4"). InnerVolumeSpecName "kube-api-access-mrq8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.460481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cf5494b-5560-4ea5-92f3-6140edc6f1f4" (UID: "6cf5494b-5560-4ea5-92f3-6140edc6f1f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.474006 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.474047 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrq8g\" (UniqueName: \"kubernetes.io/projected/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-kube-api-access-mrq8g\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.474076 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf5494b-5560-4ea5-92f3-6140edc6f1f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.879292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j5wz" event={"ID":"6cf5494b-5560-4ea5-92f3-6140edc6f1f4","Type":"ContainerDied","Data":"60ae6639e05824240e94fe977fd4520d2147b98445d1fe89fb4e3c874201123c"} Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.879380 4858 scope.go:117] "RemoveContainer" containerID="c7bf97126eec168504c185dd4c40e51971ace91757842ce175915b1db0693b92" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.879549 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4j5wz" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.893636 4858 scope.go:117] "RemoveContainer" containerID="9cbb99bb76dd2c3c2c4a13bb891511d9d67f72e21486d98504c50fe7953f501f" Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.910283 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.912659 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4j5wz"] Nov 22 07:26:38 crc kubenswrapper[4858]: I1122 07:26:38.917484 4858 scope.go:117] "RemoveContainer" containerID="ebfe400ea74e39d5525b90b5c80937a6d87b7403fafc47cf31a3cc8267c634d4" Nov 22 07:26:39 crc kubenswrapper[4858]: I1122 07:26:39.542170 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" path="/var/lib/kubelet/pods/6cf5494b-5560-4ea5-92f3-6140edc6f1f4/volumes" Nov 22 07:26:45 crc kubenswrapper[4858]: I1122 07:26:45.312533 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:26:45 crc kubenswrapper[4858]: I1122 07:26:45.313496 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:27:15 crc kubenswrapper[4858]: I1122 07:27:15.312119 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:27:15 crc kubenswrapper[4858]: I1122 07:27:15.312727 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:27:45 crc kubenswrapper[4858]: I1122 07:27:45.312409 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:27:45 crc kubenswrapper[4858]: I1122 07:27:45.312984 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:27:45 crc kubenswrapper[4858]: I1122 07:27:45.313079 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:27:45 crc kubenswrapper[4858]: I1122 07:27:45.313736 4858 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:27:45 crc kubenswrapper[4858]: I1122 07:27:45.313857 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb" gracePeriod=600 Nov 22 07:27:46 crc kubenswrapper[4858]: I1122 07:27:46.233765 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb" exitCode=0 Nov 22 07:27:46 crc kubenswrapper[4858]: I1122 07:27:46.233802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb"} Nov 22 07:27:46 crc kubenswrapper[4858]: I1122 07:27:46.234073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8"} Nov 22 07:27:46 crc kubenswrapper[4858]: I1122 07:27:46.234092 4858 scope.go:117] "RemoveContainer" containerID="0b65c8333ff002bee1634062539fd2ee400b1e60a8967a4f14f466c0ca9ea940" Nov 22 07:28:08 crc kubenswrapper[4858]: E1122 07:28:08.735918 4858 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.201s" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.143671 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68"] Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144535 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144553 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144568 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144577 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144590 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144596 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144604 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144610 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144620 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144625 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144632 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144638 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144647 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144654 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144670 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144680 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144688 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="extract-content" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144698 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144704 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: E1122 07:30:00.144711 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144717 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="extract-utilities" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144802 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="65fdebc6-cbd1-4cbf-bc3a-e97b22dc2c6e" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144811 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="da65e286-bb95-4ca1-8492-7d9d7a98d7ec" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144823 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a9743a-162e-4068-8974-3d5c75b281f9" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.144829 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf5494b-5560-4ea5-92f3-6140edc6f1f4" containerName="registry-server" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.145265 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.149132 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.149420 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.163084 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68"] Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.223132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.223417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.223561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkf2\" (UniqueName: \"kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.324354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.324428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.324478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkf2\" (UniqueName: \"kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.325786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.341407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.344656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkf2\" (UniqueName: \"kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2\") pod \"collect-profiles-29396610-xdf68\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.471053 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:00 crc kubenswrapper[4858]: I1122 07:30:00.676603 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68"] Nov 22 07:30:01 crc kubenswrapper[4858]: I1122 07:30:01.318435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" event={"ID":"bec7658a-c01c-4e79-9c95-c591bc5af55d","Type":"ContainerStarted","Data":"4657dfdf3eb48401aaa4b4e7def219ddc57ab48859ff33eabe4f3fdb2760a299"} Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.327363 4858 generic.go:334] "Generic (PLEG): container finished" podID="bec7658a-c01c-4e79-9c95-c591bc5af55d" containerID="42e9dbf0d5ef9081d0abc2ecbd765b454768fad3024d85081fdcfabbc9bec948" exitCode=0 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.327548 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" event={"ID":"bec7658a-c01c-4e79-9c95-c591bc5af55d","Type":"ContainerDied","Data":"42e9dbf0d5ef9081d0abc2ecbd765b454768fad3024d85081fdcfabbc9bec948"} Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.871581 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncp4k"] Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872002 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-controller" containerID="cri-o://0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872057 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="nbdb" containerID="cri-o://48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872130 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="sbdb" containerID="cri-o://3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872197 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-node" containerID="cri-o://5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872173 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="northd" containerID="cri-o://923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.872305 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-acl-logging" 
containerID="cri-o://df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.873924 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2" gracePeriod=30 Nov 22 07:30:02 crc kubenswrapper[4858]: I1122 07:30:02.907725 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" containerID="cri-o://00929b1df5a26e30a2c1037684fb453f88d28c10c6b8fded2bae6634a9c69e77" gracePeriod=30 Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.335788 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/3.log" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.340404 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-acl-logging/0.log" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.341813 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553" exitCode=143 Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.341929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553"} Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.373481 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.469854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume\") pod \"bec7658a-c01c-4e79-9c95-c591bc5af55d\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.469993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume\") pod \"bec7658a-c01c-4e79-9c95-c591bc5af55d\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.470038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkf2\" (UniqueName: \"kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2\") pod \"bec7658a-c01c-4e79-9c95-c591bc5af55d\" (UID: \"bec7658a-c01c-4e79-9c95-c591bc5af55d\") " Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.471551 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume" (OuterVolumeSpecName: "config-volume") pod "bec7658a-c01c-4e79-9c95-c591bc5af55d" (UID: "bec7658a-c01c-4e79-9c95-c591bc5af55d"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.475166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bec7658a-c01c-4e79-9c95-c591bc5af55d" (UID: "bec7658a-c01c-4e79-9c95-c591bc5af55d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.475237 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2" (OuterVolumeSpecName: "kube-api-access-rjkf2") pod "bec7658a-c01c-4e79-9c95-c591bc5af55d" (UID: "bec7658a-c01c-4e79-9c95-c591bc5af55d"). InnerVolumeSpecName "kube-api-access-rjkf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.571497 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkf2\" (UniqueName: \"kubernetes.io/projected/bec7658a-c01c-4e79-9c95-c591bc5af55d-kube-api-access-rjkf2\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.571553 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec7658a-c01c-4e79-9c95-c591bc5af55d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:03 crc kubenswrapper[4858]: I1122 07:30:03.571566 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec7658a-c01c-4e79-9c95-c591bc5af55d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.349845 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovnkube-controller/3.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.352882 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-acl-logging/0.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353442 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-controller/0.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353843 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="00929b1df5a26e30a2c1037684fb453f88d28c10c6b8fded2bae6634a9c69e77" exitCode=0 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353867 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf" exitCode=0 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353875 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084" exitCode=0 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353883 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36" exitCode=0 Nov 22 07:30:04 crc 
kubenswrapper[4858]: I1122 07:30:04.353889 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2" exitCode=0 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"00929b1df5a26e30a2c1037684fb453f88d28c10c6b8fded2bae6634a9c69e77"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353934 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d" exitCode=0 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353944 4858 generic.go:334] "Generic (PLEG): container finished" podID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerID="0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86" exitCode=143 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.353988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.354004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.354016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.354027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.354047 4858 scope.go:117] "RemoveContainer" containerID="8e10b013672ae8934bb8499495d1944741ad07e1b3ce26cfc4c61441a82d6f4e" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.356465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" event={"ID":"bec7658a-c01c-4e79-9c95-c591bc5af55d","Type":"ContainerDied","Data":"4657dfdf3eb48401aaa4b4e7def219ddc57ab48859ff33eabe4f3fdb2760a299"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.356498 
4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4657dfdf3eb48401aaa4b4e7def219ddc57ab48859ff33eabe4f3fdb2760a299" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.356506 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.366674 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/2.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.367133 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/1.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.367178 4858 generic.go:334] "Generic (PLEG): container finished" podID="a6492476-649f-4291-81c3-e6f5a6398b70" containerID="19b7b8eef55f72f28c31aec38d6e3551fe3cdddeddb0d1c8f92ce3bec9d5c1d8" exitCode=2 Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.367211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerDied","Data":"19b7b8eef55f72f28c31aec38d6e3551fe3cdddeddb0d1c8f92ce3bec9d5c1d8"} Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.367730 4858 scope.go:117] "RemoveContainer" containerID="19b7b8eef55f72f28c31aec38d6e3551fe3cdddeddb0d1c8f92ce3bec9d5c1d8" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.568836 4858 scope.go:117] "RemoveContainer" containerID="ed070edbc516f853417d0d4a8be15fb0b79c183b961795b68984b7f96c8b292b" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.575815 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-acl-logging/0.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.576241 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-controller/0.log" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.576812 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655250 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx878"] Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655510 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655534 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655547 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655555 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-node" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655567 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-node" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655578 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="nbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655584 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="nbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655593 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kubecfg-setup" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655598 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kubecfg-setup" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655607 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="sbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655612 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="sbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655622 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655628 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655641 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-acl-logging" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-acl-logging" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655657 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655662 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655668 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655673 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655682 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec7658a-c01c-4e79-9c95-c591bc5af55d" containerName="collect-profiles" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655687 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec7658a-c01c-4e79-9c95-c591bc5af55d" containerName="collect-profiles" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655696 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="northd" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655701 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="northd" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655709 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655715 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655812 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655823 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655829 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec7658a-c01c-4e79-9c95-c591bc5af55d" containerName="collect-profiles" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655837 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655844 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="kube-rbac-proxy-node" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655851 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655857 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovn-acl-logging" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655865 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: 
I1122 07:30:04.655873 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="sbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655880 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="nbdb" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655887 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="northd" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: E1122 07:30:04.655979 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.655985 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.656083 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" containerName="ovnkube-controller" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.657913 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685894 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685924 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685983 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.685994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686083 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686172 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.686241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.687817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.687870 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.687907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc 
kubenswrapper[4858]: I1122 07:30:04.687938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.687977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk6nb\" (UniqueName: \"kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log" (OuterVolumeSpecName: "node-log") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688125 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket" (OuterVolumeSpecName: "log-socket") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688546 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash" (OuterVolumeSpecName: "host-slash") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.688683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.689053 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.689100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.696483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.696814 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb" (OuterVolumeSpecName: "kube-api-access-dk6nb") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "kube-api-access-dk6nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697121 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697192 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697307 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd\") pod \"14e03227-73ca-4f1f-b3e0-28a197f72b42\" (UID: \"14e03227-73ca-4f1f-b3e0-28a197f72b42\") " Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697559 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-netd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-kubelet\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-netns\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-node-log\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-log-socket\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-var-lib-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697847 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjfd\" (UniqueName: \"kubernetes.io/projected/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-kube-api-access-xqjfd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-systemd-units\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-script-lib\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovn-node-metrics-cert\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.697983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-env-overrides\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698026 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-config\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698063 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-systemd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-slash\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-bin\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-etc-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-ovn\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698256 4858 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-slash\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698277 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 22 
07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698292 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698303 4858 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698332 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698344 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk6nb\" (UniqueName: \"kubernetes.io/projected/14e03227-73ca-4f1f-b3e0-28a197f72b42-kube-api-access-dk6nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698358 4858 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698369 4858 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698380 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698391 4858 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-node-log\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698404 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698415 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698426 4858 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698437 4858 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-log-socket\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698448 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14e03227-73ca-4f1f-b3e0-28a197f72b42-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc 
kubenswrapper[4858]: I1122 07:30:04.698459 4858 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698545 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.698569 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.703072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "14e03227-73ca-4f1f-b3e0-28a197f72b42" (UID: "14e03227-73ca-4f1f-b3e0-28a197f72b42"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-kubelet\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-netns\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-node-log\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-log-socket\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-var-lib-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjfd\" (UniqueName: \"kubernetes.io/projected/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-kube-api-access-xqjfd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-systemd-units\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc 
kubenswrapper[4858]: I1122 07:30:04.799902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-script-lib\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovn-node-metrics-cert\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799956 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-env-overrides\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-config\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.799992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-systemd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-slash\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-bin\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-etc-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800084 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-ovn\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-netd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800163 4858 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800180 4858 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800200 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800212 4858 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14e03227-73ca-4f1f-b3e0-28a197f72b42-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-log-socket\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-kubelet\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-netns\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-node-log\") pod 
\"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-systemd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.800970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-bin\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-slash\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-etc-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801048 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-var-lib-openvswitch\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-script-lib\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-systemd-units\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-run-ovn\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: 
I1122 07:30:04.801413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801509 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-host-cni-netd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-env-overrides\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.801854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovnkube-config\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.805857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-ovn-node-metrics-cert\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.819976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjfd\" (UniqueName: \"kubernetes.io/projected/b7e64f5c-48cc-49da-b6b6-7e977e9f0622-kube-api-access-xqjfd\") pod \"ovnkube-node-bx878\" (UID: \"b7e64f5c-48cc-49da-b6b6-7e977e9f0622\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: I1122 07:30:04.975371 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:04 crc kubenswrapper[4858]: W1122 07:30:04.991843 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7e64f5c_48cc_49da_b6b6_7e977e9f0622.slice/crio-459dcdcd9de87da016149a74206cf5f47b7059438932a1053c22ec6d67d41006 WatchSource:0}: Error finding container 459dcdcd9de87da016149a74206cf5f47b7059438932a1053c22ec6d67d41006: Status 404 returned error can't find the container with id 459dcdcd9de87da016149a74206cf5f47b7059438932a1053c22ec6d67d41006 Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.373857 4858 generic.go:334] "Generic (PLEG): container finished" podID="b7e64f5c-48cc-49da-b6b6-7e977e9f0622" containerID="5e11ca68546a8422711b0e20a000061e63cdf80b84fb1101dc70f2c4a3958caf" exitCode=0 Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.374003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerDied","Data":"5e11ca68546a8422711b0e20a000061e63cdf80b84fb1101dc70f2c4a3958caf"} Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.374177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"459dcdcd9de87da016149a74206cf5f47b7059438932a1053c22ec6d67d41006"} Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.376655 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-56l5j_a6492476-649f-4291-81c3-e6f5a6398b70/kube-multus/2.log" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.377010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-56l5j" event={"ID":"a6492476-649f-4291-81c3-e6f5a6398b70","Type":"ContainerStarted","Data":"66137ec2e11d3a0a86d2c391d2b19eda13d8b7b35abfafc9874d7cb51db0f00e"} Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.379892 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-acl-logging/0.log" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.380418 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ncp4k_14e03227-73ca-4f1f-b3e0-28a197f72b42/ovn-controller/0.log" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.380778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" event={"ID":"14e03227-73ca-4f1f-b3e0-28a197f72b42","Type":"ContainerDied","Data":"7eac266bcff4c7c476e55991826351d6846f2174d30e9044d8bf0cb20d229fd0"} Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.380852 4858 scope.go:117] "RemoveContainer" containerID="00929b1df5a26e30a2c1037684fb453f88d28c10c6b8fded2bae6634a9c69e77" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.380995 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ncp4k" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.399425 4858 scope.go:117] "RemoveContainer" containerID="3a2b3f1d5cf05d2c5c266c2306eab1ecf432e5f2b5075e3ab0f8aa14873801cf" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.438748 4858 scope.go:117] "RemoveContainer" containerID="48cd24c3cbc8f298c5c9584789ef0f0c1f8df87d2f27c3efe25a63f33b9f2084" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.455898 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncp4k"] Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.460247 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ncp4k"] Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.464659 4858 scope.go:117] "RemoveContainer" containerID="923a1b658fee28a9f5f790f214e3fd1a6b66c8f256810f6c4fb009ba0ea8bb36" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.477404 4858 scope.go:117] "RemoveContainer" containerID="24b6944ac74f4643308efb74a7fbae49385af2fa5b2ed81471997050d01bdae2" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.492174 4858 scope.go:117] "RemoveContainer" containerID="5b6cf49428e2a61fe06c4dea3be8a9335289311d7174c290eadeaa91f4fbda1d" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.506613 4858 scope.go:117] "RemoveContainer" containerID="df3389a8b0169dc215b73a3748d4427780df5685df512c63a59fd4fd77c0d553" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.545397 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e03227-73ca-4f1f-b3e0-28a197f72b42" path="/var/lib/kubelet/pods/14e03227-73ca-4f1f-b3e0-28a197f72b42/volumes" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.585767 4858 scope.go:117] "RemoveContainer" containerID="0fb9645296126c9748b9254cabdbb68d2b65663d189629f048387d63632e0b86" Nov 22 07:30:05 crc kubenswrapper[4858]: I1122 07:30:05.601078 4858 scope.go:117] "RemoveContainer" containerID="9e1850a2c20ec041ab4ddf14543dece0b946ddd85c362168bcc3f48352d051e7" Nov 22 07:30:06 crc kubenswrapper[4858]: I1122 07:30:06.394397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"367ee8a9687d25e86ef6e275edc87ad2866f69dee41bdf389fe5e6ac6933e6ff"} Nov 22 07:30:06 crc kubenswrapper[4858]: I1122 07:30:06.394730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"84220ea031ed3ac34707e14e3deb0d7c50bc2341a6db74ec80d09aa11f875495"} Nov 22 07:30:07 crc kubenswrapper[4858]: I1122 07:30:07.408750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"41dcd750026ebbc6e2dc8668fae7eab55e4b728d6830844b6332ea28a9e17bc6"} Nov 22 07:30:07 crc kubenswrapper[4858]: I1122 07:30:07.409202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"c7f90cbdd6dda6d76f45afc3522b51fec5386970d15595ed24fd0f0fecee09ea"} Nov 22 07:30:08 crc kubenswrapper[4858]: I1122 07:30:08.431368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" 
event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"edaddaf8204882bcefb5149ec147c1ced395bfb9fcd6a51743860660ac129d9f"} Nov 22 07:30:08 crc kubenswrapper[4858]: I1122 07:30:08.431715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"39e3a0a591ef9a66109a4387d6fdbe4b35a13d21f71e1f146bf971bbaf94d6ab"} Nov 22 07:30:10 crc kubenswrapper[4858]: I1122 07:30:10.444572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"c9a4eee3a4e90fbe07bc80c634a758ed08d33a2711bf869d0a6d82d7af939f13"} Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.469600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" event={"ID":"b7e64f5c-48cc-49da-b6b6-7e977e9f0622","Type":"ContainerStarted","Data":"8b3157dde978bcf14264d89b23d3ce1a4d1b852e66ecfd5ae181a7718590c08f"} Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.470266 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.470379 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.470488 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.501777 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.504234 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.506752 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" podStartSLOduration=9.506741286 podStartE2EDuration="9.506741286s" podCreationTimestamp="2025-11-22 07:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:13.503905565 +0000 UTC m=+1175.345328571" watchObservedRunningTime="2025-11-22 07:30:13.506741286 +0000 UTC m=+1175.348164292" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.682864 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hnxjz"] Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.683533 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.686056 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.686289 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.686506 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.691243 4858 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-dmxpx" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.692517 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hnxjz"] Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.759269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8jv\" (UniqueName: \"kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.759513 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.759600 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.860922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj8jv\" (UniqueName: \"kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.860985 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.861034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.861447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " 
pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.862167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:13 crc kubenswrapper[4858]: I1122 07:30:13.886221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj8jv\" (UniqueName: \"kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv\") pod \"crc-storage-crc-hnxjz\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: I1122 07:30:14.000734 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.036892 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(e66edfcde9eb45165ad7816299355fc06efccb9049b1973036be971271b6dff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.037005 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(e66edfcde9eb45165ad7816299355fc06efccb9049b1973036be971271b6dff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.037034 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(e66edfcde9eb45165ad7816299355fc06efccb9049b1973036be971271b6dff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.037094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hnxjz_crc-storage(a3286385-d91f-471f-be8c-9b439311fa51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hnxjz_crc-storage(a3286385-d91f-471f-be8c-9b439311fa51)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(e66edfcde9eb45165ad7816299355fc06efccb9049b1973036be971271b6dff1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hnxjz" podUID="a3286385-d91f-471f-be8c-9b439311fa51" Nov 22 07:30:14 crc kubenswrapper[4858]: I1122 07:30:14.475275 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: I1122 07:30:14.476144 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.508627 4858 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(c68829076ba23874887ebaa7256700475107ac55f8f345a032726631d3212640): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.508720 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(c68829076ba23874887ebaa7256700475107ac55f8f345a032726631d3212640): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.508752 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(c68829076ba23874887ebaa7256700475107ac55f8f345a032726631d3212640): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:14 crc kubenswrapper[4858]: E1122 07:30:14.508812 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hnxjz_crc-storage(a3286385-d91f-471f-be8c-9b439311fa51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hnxjz_crc-storage(a3286385-d91f-471f-be8c-9b439311fa51)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hnxjz_crc-storage_a3286385-d91f-471f-be8c-9b439311fa51_0(c68829076ba23874887ebaa7256700475107ac55f8f345a032726631d3212640): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hnxjz" podUID="a3286385-d91f-471f-be8c-9b439311fa51" Nov 22 07:30:15 crc kubenswrapper[4858]: I1122 07:30:15.313072 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:30:15 crc kubenswrapper[4858]: I1122 07:30:15.313154 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:29 crc kubenswrapper[4858]: I1122 07:30:29.535231 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:29 crc kubenswrapper[4858]: I1122 07:30:29.539113 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:29 crc kubenswrapper[4858]: I1122 07:30:29.734087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hnxjz"] Nov 22 07:30:30 crc kubenswrapper[4858]: I1122 07:30:30.569574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hnxjz" event={"ID":"a3286385-d91f-471f-be8c-9b439311fa51","Type":"ContainerStarted","Data":"54ae7d6163ad6983a61d1b42ceab2e13cc438099b2a5ad272a9b801920dc702c"} Nov 22 07:30:35 crc kubenswrapper[4858]: I1122 07:30:35.001083 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx878" Nov 22 07:30:39 crc kubenswrapper[4858]: I1122 07:30:39.614188 4858 generic.go:334] "Generic (PLEG): container finished" podID="a3286385-d91f-471f-be8c-9b439311fa51" containerID="dd27dcd3b7ce6d59c9ef85714b0507446c6f31a57249203660fa91083e5f9df3" exitCode=0 Nov 22 07:30:39 crc kubenswrapper[4858]: I1122 07:30:39.614266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hnxjz" event={"ID":"a3286385-d91f-471f-be8c-9b439311fa51","Type":"ContainerDied","Data":"dd27dcd3b7ce6d59c9ef85714b0507446c6f31a57249203660fa91083e5f9df3"} Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.835774 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.896243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage\") pod \"a3286385-d91f-471f-be8c-9b439311fa51\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.896358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj8jv\" (UniqueName: \"kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv\") pod \"a3286385-d91f-471f-be8c-9b439311fa51\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.896402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt\") pod \"a3286385-d91f-471f-be8c-9b439311fa51\" (UID: \"a3286385-d91f-471f-be8c-9b439311fa51\") " Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.896750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a3286385-d91f-471f-be8c-9b439311fa51" (UID: "a3286385-d91f-471f-be8c-9b439311fa51"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.902749 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv" (OuterVolumeSpecName: "kube-api-access-zj8jv") pod "a3286385-d91f-471f-be8c-9b439311fa51" (UID: "a3286385-d91f-471f-be8c-9b439311fa51"). InnerVolumeSpecName "kube-api-access-zj8jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.911470 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a3286385-d91f-471f-be8c-9b439311fa51" (UID: "a3286385-d91f-471f-be8c-9b439311fa51"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.998243 4858 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a3286385-d91f-471f-be8c-9b439311fa51-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.998302 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj8jv\" (UniqueName: \"kubernetes.io/projected/a3286385-d91f-471f-be8c-9b439311fa51-kube-api-access-zj8jv\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:40 crc kubenswrapper[4858]: I1122 07:30:40.998312 4858 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a3286385-d91f-471f-be8c-9b439311fa51-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4858]: I1122 07:30:41.626870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hnxjz" event={"ID":"a3286385-d91f-471f-be8c-9b439311fa51","Type":"ContainerDied","Data":"54ae7d6163ad6983a61d1b42ceab2e13cc438099b2a5ad272a9b801920dc702c"} Nov 22 07:30:41 crc kubenswrapper[4858]: I1122 07:30:41.626913 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ae7d6163ad6983a61d1b42ceab2e13cc438099b2a5ad272a9b801920dc702c" Nov 22 07:30:41 crc kubenswrapper[4858]: I1122 07:30:41.626919 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hnxjz" Nov 22 07:30:45 crc kubenswrapper[4858]: I1122 07:30:45.311877 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:30:45 crc kubenswrapper[4858]: I1122 07:30:45.312268 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.800014 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt"] Nov 22 07:30:47 crc kubenswrapper[4858]: E1122 07:30:47.800551 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3286385-d91f-471f-be8c-9b439311fa51" containerName="storage" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.800564 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3286385-d91f-471f-be8c-9b439311fa51" containerName="storage" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.800660 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3286385-d91f-471f-be8c-9b439311fa51" containerName="storage" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.801368 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.804009 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.811730 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt"] Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.880826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77c2g\" (UniqueName: \"kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.880918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.880962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: 
\"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.981621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77c2g\" (UniqueName: \"kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.981735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.981774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.982290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:47 crc kubenswrapper[4858]: I1122 07:30:47.982290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:48 crc kubenswrapper[4858]: I1122 07:30:48.004146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77c2g\" (UniqueName: \"kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:48 crc kubenswrapper[4858]: I1122 07:30:48.126305 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:30:48 crc kubenswrapper[4858]: I1122 07:30:48.301204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt"] Nov 22 07:30:48 crc kubenswrapper[4858]: I1122 07:30:48.664809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerStarted","Data":"c2bc37dac4a1edbad863ea4441adcf509f94d626065aadc7522ca07f47ea2516"} Nov 22 07:30:49 crc kubenswrapper[4858]: I1122 07:30:49.673178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerStarted","Data":"670109effe52989b9b2e4eda93d7e7e6a5c2d9cf4052274524aa38b8ee392848"} Nov 22 07:30:50 crc kubenswrapper[4858]: I1122 07:30:50.680415 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerID="670109effe52989b9b2e4eda93d7e7e6a5c2d9cf4052274524aa38b8ee392848" exitCode=0 Nov 22 07:30:50 crc kubenswrapper[4858]: I1122 07:30:50.680461 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerDied","Data":"670109effe52989b9b2e4eda93d7e7e6a5c2d9cf4052274524aa38b8ee392848"} Nov 22 07:30:57 crc kubenswrapper[4858]: I1122 07:30:57.719398 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerID="119f05776dd84b81d65eb4b974004e5c230fdcb871201cba39659fadd4818576" exitCode=0 Nov 22 07:30:57 crc kubenswrapper[4858]: I1122 07:30:57.719470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerDied","Data":"119f05776dd84b81d65eb4b974004e5c230fdcb871201cba39659fadd4818576"} Nov 22 07:30:59 crc kubenswrapper[4858]: I1122 07:30:59.735297 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerID="a869563699e655867f80f200f96bd6d96f10eca62958a9c6e0b8c41febdeca79" exitCode=0 Nov 22 07:30:59 crc kubenswrapper[4858]: I1122 07:30:59.735371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerDied","Data":"a869563699e655867f80f200f96bd6d96f10eca62958a9c6e0b8c41febdeca79"} Nov 22 07:31:00 crc kubenswrapper[4858]: I1122 07:31:00.961719 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.045838 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle\") pod \"3fe936f3-d183-4438-8eb7-4357e52d4efb\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.045968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77c2g\" (UniqueName: \"kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g\") pod \"3fe936f3-d183-4438-8eb7-4357e52d4efb\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.046059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util\") pod \"3fe936f3-d183-4438-8eb7-4357e52d4efb\" (UID: \"3fe936f3-d183-4438-8eb7-4357e52d4efb\") " Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.046535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle" (OuterVolumeSpecName: "bundle") pod "3fe936f3-d183-4438-8eb7-4357e52d4efb" (UID: "3fe936f3-d183-4438-8eb7-4357e52d4efb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.053341 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g" (OuterVolumeSpecName: "kube-api-access-77c2g") pod "3fe936f3-d183-4438-8eb7-4357e52d4efb" (UID: "3fe936f3-d183-4438-8eb7-4357e52d4efb"). InnerVolumeSpecName "kube-api-access-77c2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.069811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util" (OuterVolumeSpecName: "util") pod "3fe936f3-d183-4438-8eb7-4357e52d4efb" (UID: "3fe936f3-d183-4438-8eb7-4357e52d4efb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.148013 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.148058 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77c2g\" (UniqueName: \"kubernetes.io/projected/3fe936f3-d183-4438-8eb7-4357e52d4efb-kube-api-access-77c2g\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.148067 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fe936f3-d183-4438-8eb7-4357e52d4efb-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.752250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" event={"ID":"3fe936f3-d183-4438-8eb7-4357e52d4efb","Type":"ContainerDied","Data":"c2bc37dac4a1edbad863ea4441adcf509f94d626065aadc7522ca07f47ea2516"} Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.752303 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bc37dac4a1edbad863ea4441adcf509f94d626065aadc7522ca07f47ea2516" Nov 22 07:31:01 crc kubenswrapper[4858]: I1122 07:31:01.752481 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehj5vt" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.376194 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-g2f66"] Nov 22 07:31:04 crc kubenswrapper[4858]: E1122 07:31:04.376471 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="util" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.376489 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="util" Nov 22 07:31:04 crc kubenswrapper[4858]: E1122 07:31:04.376501 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="extract" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.376509 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="extract" Nov 22 07:31:04 crc kubenswrapper[4858]: E1122 07:31:04.376524 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="pull" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.376531 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="pull" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.376645 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe936f3-d183-4438-8eb7-4357e52d4efb" containerName="extract" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.377381 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.381752 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qccbt" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.381782 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.381838 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.394093 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-g2f66"] Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.486891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rzb\" (UniqueName: \"kubernetes.io/projected/99cf40d1-ff7a-4a37-ac78-601bf31a9f94-kube-api-access-s8rzb\") pod \"nmstate-operator-557fdffb88-g2f66\" (UID: \"99cf40d1-ff7a-4a37-ac78-601bf31a9f94\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.587897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8rzb\" (UniqueName: \"kubernetes.io/projected/99cf40d1-ff7a-4a37-ac78-601bf31a9f94-kube-api-access-s8rzb\") pod \"nmstate-operator-557fdffb88-g2f66\" (UID: \"99cf40d1-ff7a-4a37-ac78-601bf31a9f94\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.608177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8rzb\" (UniqueName: \"kubernetes.io/projected/99cf40d1-ff7a-4a37-ac78-601bf31a9f94-kube-api-access-s8rzb\") pod \"nmstate-operator-557fdffb88-g2f66\" (UID: \"99cf40d1-ff7a-4a37-ac78-601bf31a9f94\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.693962 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.887067 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-g2f66"] Nov 22 07:31:04 crc kubenswrapper[4858]: I1122 07:31:04.906005 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:31:05 crc kubenswrapper[4858]: I1122 07:31:05.772971 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" event={"ID":"99cf40d1-ff7a-4a37-ac78-601bf31a9f94","Type":"ContainerStarted","Data":"6550213f8954f566eab3070dff6deb7d4575aa0d6f1ac24e10148261da750ada"} Nov 22 07:31:12 crc kubenswrapper[4858]: I1122 07:31:12.808082 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" event={"ID":"99cf40d1-ff7a-4a37-ac78-601bf31a9f94","Type":"ContainerStarted","Data":"bd3e60871d2849310754445bc75cb72a16488bc69343b017f17bf0a5564a4d03"} Nov 22 07:31:12 crc kubenswrapper[4858]: I1122 07:31:12.826142 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-g2f66" podStartSLOduration=2.11178618 podStartE2EDuration="8.826125012s" podCreationTimestamp="2025-11-22 07:31:04 +0000 UTC" firstStartedPulling="2025-11-22 07:31:04.905753576 +0000 UTC m=+1226.747176582" lastFinishedPulling="2025-11-22 07:31:11.620092408 +0000 UTC m=+1233.461515414" observedRunningTime="2025-11-22 07:31:12.823298501 +0000 UTC m=+1234.664721507" watchObservedRunningTime="2025-11-22 07:31:12.826125012 +0000 UTC m=+1234.667548018" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.777266 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.778701 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.780838 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wkq9d" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.798104 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.802165 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.803219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.805440 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.820123 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7flvw"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.820808 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.846672 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-nmstate-lock\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911499 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zrc\" (UniqueName: \"kubernetes.io/projected/324deac9-e38d-4625-8e13-865120d6199b-kube-api-access-96zrc\") pod \"nmstate-metrics-5dcf9c57c5-4vcwx\" (UID: \"324deac9-e38d-4625-8e13-865120d6199b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-ovs-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45fl\" (UniqueName: \"kubernetes.io/projected/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-kube-api-access-f45fl\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911633 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r4vf\" (UniqueName: \"kubernetes.io/projected/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-kube-api-access-6r4vf\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.911698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-dbus-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.968192 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj"] Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.969061 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.972045 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.972078 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.972078 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qfvvm" Nov 22 07:31:13 crc kubenswrapper[4858]: I1122 07:31:13.982556 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj"] Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013276 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r4vf\" (UniqueName: \"kubernetes.io/projected/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-kube-api-access-6r4vf\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-dbus-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-nmstate-lock\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013495 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frcz\" (UniqueName: \"kubernetes.io/projected/bfc9c340-3e1a-43a2-9f86-15bb3b283552-kube-api-access-8frcz\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bfc9c340-3e1a-43a2-9f86-15bb3b283552-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" 
(UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013540 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96zrc\" (UniqueName: \"kubernetes.io/projected/324deac9-e38d-4625-8e13-865120d6199b-kube-api-access-96zrc\") pod \"nmstate-metrics-5dcf9c57c5-4vcwx\" (UID: \"324deac9-e38d-4625-8e13-865120d6199b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-ovs-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45fl\" (UniqueName: \"kubernetes.io/projected/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-kube-api-access-f45fl\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-nmstate-lock\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-ovs-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.013770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-dbus-socket\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.018881 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.029214 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45fl\" (UniqueName: \"kubernetes.io/projected/88c3a54f-714a-418e-ba1f-31cce9ee3a6b-kube-api-access-f45fl\") pod \"nmstate-webhook-6b89b748d8-k69t5\" (UID: \"88c3a54f-714a-418e-ba1f-31cce9ee3a6b\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.030749 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96zrc\" (UniqueName: \"kubernetes.io/projected/324deac9-e38d-4625-8e13-865120d6199b-kube-api-access-96zrc\") pod \"nmstate-metrics-5dcf9c57c5-4vcwx\" (UID: 
\"324deac9-e38d-4625-8e13-865120d6199b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.037406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r4vf\" (UniqueName: \"kubernetes.io/projected/ea600bc5-9ee7-40e6-a72a-d4414fc10e23-kube-api-access-6r4vf\") pod \"nmstate-handler-7flvw\" (UID: \"ea600bc5-9ee7-40e6-a72a-d4414fc10e23\") " pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.099972 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.115115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.115213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frcz\" (UniqueName: \"kubernetes.io/projected/bfc9c340-3e1a-43a2-9f86-15bb3b283552-kube-api-access-8frcz\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.115237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bfc9c340-3e1a-43a2-9f86-15bb3b283552-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: E1122 07:31:14.115931 4858 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 22 07:31:14 crc kubenswrapper[4858]: E1122 07:31:14.116014 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert podName:bfc9c340-3e1a-43a2-9f86-15bb3b283552 nodeName:}" failed. No retries permitted until 2025-11-22 07:31:14.61599006 +0000 UTC m=+1236.457413066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-djvvj" (UID: "bfc9c340-3e1a-43a2-9f86-15bb3b283552") : secret "plugin-serving-cert" not found Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.116239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bfc9c340-3e1a-43a2-9f86-15bb3b283552-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.128636 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.139463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frcz\" (UniqueName: \"kubernetes.io/projected/bfc9c340-3e1a-43a2-9f86-15bb3b283552-kube-api-access-8frcz\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.141931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.143106 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68786f6844-2c8cn"] Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.143851 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.180489 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68786f6844-2c8cn"] Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-service-ca\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-oauth-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-trusted-ca-bundle\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-oauth-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnvd\" (UniqueName: \"kubernetes.io/projected/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-kube-api-access-xcnvd\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.219283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.322674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-oauth-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.323154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-trusted-ca-bundle\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.323265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-oauth-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.323297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcnvd\" (UniqueName: \"kubernetes.io/projected/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-kube-api-access-xcnvd\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.325115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.325167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.325216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-service-ca\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.326211 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-service-ca\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.326485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-oauth-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.326805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-trusted-ca-bundle\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.327240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.333243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-oauth-config\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.340657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-console-serving-cert\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.352840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcnvd\" (UniqueName: \"kubernetes.io/projected/e2e6266c-268c-4f5a-950b-ffd5fdcc28b0-kube-api-access-xcnvd\") pod \"console-68786f6844-2c8cn\" (UID: \"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0\") " pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.437361 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx"] Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.488676 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.493507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5"] Nov 22 07:31:14 crc kubenswrapper[4858]: W1122 07:31:14.495162 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c3a54f_714a_418e_ba1f_31cce9ee3a6b.slice/crio-bdee12b2b566dcd940cb11da1a285babe0a73a452e85ff465aabf25f1020bd02 WatchSource:0}: Error finding container bdee12b2b566dcd940cb11da1a285babe0a73a452e85ff465aabf25f1020bd02: Status 404 returned error can't find the container with id bdee12b2b566dcd940cb11da1a285babe0a73a452e85ff465aabf25f1020bd02 Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.631527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.634933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc9c340-3e1a-43a2-9f86-15bb3b283552-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-djvvj\" (UID: \"bfc9c340-3e1a-43a2-9f86-15bb3b283552\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.661904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68786f6844-2c8cn"] Nov 22 07:31:14 crc kubenswrapper[4858]: W1122 07:31:14.663848 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e6266c_268c_4f5a_950b_ffd5fdcc28b0.slice/crio-770a9b22bb5641a3147e9d6790895875c05ca6c6de1cd62ebd053bde19227b4f WatchSource:0}: Error finding container 770a9b22bb5641a3147e9d6790895875c05ca6c6de1cd62ebd053bde19227b4f: Status 404 returned error can't find the container with id 770a9b22bb5641a3147e9d6790895875c05ca6c6de1cd62ebd053bde19227b4f Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.822061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" event={"ID":"88c3a54f-714a-418e-ba1f-31cce9ee3a6b","Type":"ContainerStarted","Data":"bdee12b2b566dcd940cb11da1a285babe0a73a452e85ff465aabf25f1020bd02"} Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.823586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" event={"ID":"324deac9-e38d-4625-8e13-865120d6199b","Type":"ContainerStarted","Data":"e3436ee7f3cb9b731362f3e5d13842afe18b35a97af579d82a32bda95ce8d6db"} Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.824914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68786f6844-2c8cn" event={"ID":"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0","Type":"ContainerStarted","Data":"770a9b22bb5641a3147e9d6790895875c05ca6c6de1cd62ebd053bde19227b4f"} Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.825948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7flvw" 
event={"ID":"ea600bc5-9ee7-40e6-a72a-d4414fc10e23","Type":"ContainerStarted","Data":"6f0130cdced29a2cd6e744902471f51a4729cc9c63e39e717495d5a2842e8b90"} Nov 22 07:31:14 crc kubenswrapper[4858]: I1122 07:31:14.884296 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.096821 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj"] Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.312712 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.312781 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.312833 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.313424 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.313494 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8" gracePeriod=600 Nov 22 07:31:15 crc kubenswrapper[4858]: I1122 07:31:15.833865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" event={"ID":"bfc9c340-3e1a-43a2-9f86-15bb3b283552","Type":"ContainerStarted","Data":"fd41a0bf78bd3eddbd3459ccff1ec510dcabb19de0621e9c4f28febc34def521"} Nov 22 07:31:16 crc kubenswrapper[4858]: I1122 07:31:16.841027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68786f6844-2c8cn" event={"ID":"e2e6266c-268c-4f5a-950b-ffd5fdcc28b0","Type":"ContainerStarted","Data":"08a66e8df92d2e70e5753f79a070e0c7829470b0e2c081a5a0e1211dcf36a665"} Nov 22 07:31:17 crc kubenswrapper[4858]: I1122 07:31:17.849586 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8" exitCode=0 Nov 22 07:31:17 crc kubenswrapper[4858]: I1122 07:31:17.849649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8"} Nov 22 07:31:17 crc kubenswrapper[4858]: I1122 07:31:17.849730 4858 scope.go:117] "RemoveContainer" containerID="ac027e3d7c7e0a4005b468f1a22c01f947af9a1def6d54df36df7ebb83715efb" Nov 22 07:31:17 crc kubenswrapper[4858]: I1122 07:31:17.874948 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68786f6844-2c8cn" podStartSLOduration=3.874920871 podStartE2EDuration="3.874920871s" podCreationTimestamp="2025-11-22 07:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:17.865374086 +0000 UTC m=+1239.706797112" watchObservedRunningTime="2025-11-22 07:31:17.874920871 +0000 UTC m=+1239.716343897" Nov 22 07:31:18 crc kubenswrapper[4858]: I1122 07:31:18.858041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e"} Nov 22 07:31:24 crc kubenswrapper[4858]: I1122 07:31:24.489868 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:24 crc kubenswrapper[4858]: I1122 07:31:24.491790 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:24 crc kubenswrapper[4858]: I1122 07:31:24.494651 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:24 crc kubenswrapper[4858]: I1122 07:31:24.901434 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68786f6844-2c8cn" Nov 22 07:31:24 crc kubenswrapper[4858]: I1122 07:31:24.947558 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:31:26 crc kubenswrapper[4858]: I1122 07:31:26.912541 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" event={"ID":"88c3a54f-714a-418e-ba1f-31cce9ee3a6b","Type":"ContainerStarted","Data":"5cbe9ef799aeb8eb207905ae6b8d09003f3265981f83c1f04a936afac38f2a43"} Nov 22 07:31:26 crc kubenswrapper[4858]: I1122 07:31:26.914530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" event={"ID":"324deac9-e38d-4625-8e13-865120d6199b","Type":"ContainerStarted","Data":"68238e2511ad1d13ecdfe5c68cecd865a41b5c9c6f0ca949e60cba104be6bc0c"} Nov 22 07:31:27 crc kubenswrapper[4858]: I1122 07:31:27.921678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7flvw" event={"ID":"ea600bc5-9ee7-40e6-a72a-d4414fc10e23","Type":"ContainerStarted","Data":"6e3e0797c6991678ca552d5e2c382ecf75a196a55323be5fd49af8b36354102f"} Nov 22 07:31:27 crc kubenswrapper[4858]: I1122 07:31:27.923215 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:27 crc kubenswrapper[4858]: I1122 07:31:27.923234 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:27 crc kubenswrapper[4858]: I1122 07:31:27.947867 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" podStartSLOduration=3.139812064 podStartE2EDuration="14.947841612s" podCreationTimestamp="2025-11-22 07:31:13 +0000 UTC" firstStartedPulling="2025-11-22 07:31:14.497922992 +0000 UTC m=+1236.339345998" lastFinishedPulling="2025-11-22 07:31:26.30595254 +0000 UTC m=+1248.147375546" observedRunningTime="2025-11-22 07:31:27.94026629 +0000 UTC m=+1249.781689296" watchObservedRunningTime="2025-11-22 07:31:27.947841612 +0000 UTC m=+1249.789264618" Nov 22 07:31:27 crc kubenswrapper[4858]: I1122 07:31:27.962701 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7flvw" podStartSLOduration=2.996075012 podStartE2EDuration="14.962678448s" podCreationTimestamp="2025-11-22 07:31:13 +0000 UTC" firstStartedPulling="2025-11-22 07:31:14.231837501 +0000 UTC m=+1236.073260507" lastFinishedPulling="2025-11-22 07:31:26.198440937 +0000 UTC m=+1248.039863943" observedRunningTime="2025-11-22 07:31:27.959465615 +0000 UTC m=+1249.800888631" watchObservedRunningTime="2025-11-22 07:31:27.962678448 +0000 UTC m=+1249.804101454" Nov 22 07:31:28 crc kubenswrapper[4858]: I1122 07:31:28.935483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" event={"ID":"bfc9c340-3e1a-43a2-9f86-15bb3b283552","Type":"ContainerStarted","Data":"ccc76ecd5251462e5997f9d09fad781927e57f1704d0cac51b0adaf4df6647cc"} Nov 22 07:31:28 crc kubenswrapper[4858]: I1122 07:31:28.963550 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-djvvj" podStartSLOduration=2.968551009 podStartE2EDuration="15.963527831s" podCreationTimestamp="2025-11-22 07:31:13 +0000 UTC" firstStartedPulling="2025-11-22 07:31:15.109391864 +0000 UTC m=+1236.950814870" lastFinishedPulling="2025-11-22 07:31:28.104368686 +0000 UTC m=+1249.945791692" observedRunningTime="2025-11-22 07:31:28.951104693 +0000 UTC m=+1250.792527699" watchObservedRunningTime="2025-11-22 07:31:28.963527831 +0000 UTC m=+1250.804950837" Nov 22 07:31:31 crc kubenswrapper[4858]: I1122 07:31:31.955935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" event={"ID":"324deac9-e38d-4625-8e13-865120d6199b","Type":"ContainerStarted","Data":"4232e889080fa464586da390bd596dc0fb775a2138200733be80c65ca007b7ab"} Nov 22 07:31:31 crc kubenswrapper[4858]: I1122 07:31:31.976044 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4vcwx" podStartSLOduration=2.229625857 podStartE2EDuration="18.976025987s" podCreationTimestamp="2025-11-22 07:31:13 +0000 UTC" firstStartedPulling="2025-11-22 07:31:14.444012496 +0000 UTC m=+1236.285435502" lastFinishedPulling="2025-11-22 07:31:31.190412626 +0000 UTC m=+1253.031835632" observedRunningTime="2025-11-22 07:31:31.973895998 +0000 UTC m=+1253.815319004" watchObservedRunningTime="2025-11-22 07:31:31.976025987 +0000 UTC m=+1253.817448993" Nov 22 07:31:34 crc kubenswrapper[4858]: I1122 07:31:34.164732 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7flvw" Nov 22 07:31:44 crc kubenswrapper[4858]: I1122 07:31:44.134521 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-k69t5" Nov 22 07:31:49 crc 
kubenswrapper[4858]: I1122 07:31:49.993662 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gtcln" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" containerID="cri-o://095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223" gracePeriod=15 Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.651232 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gtcln_6af73c1f-5d33-4e17-8331-61cf5b084487/console/0.log" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.651483 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.756806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.756885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.756967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxg77\" (UniqueName: \"kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert\") pod \"6af73c1f-5d33-4e17-8331-61cf5b084487\" (UID: \"6af73c1f-5d33-4e17-8331-61cf5b084487\") " Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757916 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config" (OuterVolumeSpecName: "console-config") pod 
"6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca" (OuterVolumeSpecName: "service-ca") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.757974 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.763250 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77" (OuterVolumeSpecName: "kube-api-access-rxg77") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "kube-api-access-rxg77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.764672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.777194 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6af73c1f-5d33-4e17-8331-61cf5b084487" (UID: "6af73c1f-5d33-4e17-8331-61cf5b084487"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858481 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858524 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858532 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858543 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858552 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6af73c1f-5d33-4e17-8331-61cf5b084487-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858560 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6af73c1f-5d33-4e17-8331-61cf5b084487-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:51 crc kubenswrapper[4858]: I1122 07:31:51.858568 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxg77\" (UniqueName: \"kubernetes.io/projected/6af73c1f-5d33-4e17-8331-61cf5b084487-kube-api-access-rxg77\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.063757 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gtcln_6af73c1f-5d33-4e17-8331-61cf5b084487/console/0.log" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.063808 4858 generic.go:334] "Generic (PLEG): container finished" podID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerID="095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223" exitCode=2 Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.063847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gtcln" event={"ID":"6af73c1f-5d33-4e17-8331-61cf5b084487","Type":"ContainerDied","Data":"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223"} Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.063882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gtcln" event={"ID":"6af73c1f-5d33-4e17-8331-61cf5b084487","Type":"ContainerDied","Data":"52e05b837c3954dc482fd1ce004877735a867a11e7c546d2601e66993682c3d4"} Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.063902 4858 scope.go:117] "RemoveContainer" containerID="095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.064042 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gtcln" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.120993 4858 scope.go:117] "RemoveContainer" containerID="095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223" Nov 22 07:31:52 crc kubenswrapper[4858]: E1122 07:31:52.124745 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223\": container with ID starting with 095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223 not found: ID does not exist" containerID="095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.124791 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223"} err="failed to get container status \"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223\": rpc error: code = NotFound desc = could not find container \"095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223\": container with ID starting with 095a123d8c0bc07039980eaca8b08787b6951a4a7f1f9ca725b44367013bc223 not found: ID does not exist" Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.127345 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:31:52 crc kubenswrapper[4858]: I1122 07:31:52.155622 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gtcln"] Nov 22 07:31:53 crc kubenswrapper[4858]: I1122 07:31:53.542522 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" path="/var/lib/kubelet/pods/6af73c1f-5d33-4e17-8331-61cf5b084487/volumes" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.253383 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q"] Nov 22 07:31:59 crc kubenswrapper[4858]: E1122 07:31:59.254374 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.254389 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.254497 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af73c1f-5d33-4e17-8331-61cf5b084487" containerName="console" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.255211 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.258367 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.266190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q"] Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.274968 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.275023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgbfp\" (UniqueName: \"kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.275047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.376122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.376187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgbfp\" (UniqueName: \"kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.376217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.376698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.376731 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.395673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgbfp\" (UniqueName: \"kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.584021 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:31:59 crc kubenswrapper[4858]: I1122 07:31:59.784563 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q"] Nov 22 07:32:00 crc kubenswrapper[4858]: I1122 07:32:00.112501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" event={"ID":"fd10936e-f961-492e-bd73-3488a8814ddb","Type":"ContainerStarted","Data":"675bc40d63dfce2f8e61da05938e56574cc771977ea86d2f9ae25b45e46e77cb"} Nov 22 07:32:01 crc kubenswrapper[4858]: I1122 07:32:01.118936 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd10936e-f961-492e-bd73-3488a8814ddb" containerID="404ef3b1c577d8edc74e0fed6eea318fa6d231464cecd6d45abf9af15509243c" exitCode=0 Nov 22 07:32:01 crc kubenswrapper[4858]: I1122 07:32:01.118989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" event={"ID":"fd10936e-f961-492e-bd73-3488a8814ddb","Type":"ContainerDied","Data":"404ef3b1c577d8edc74e0fed6eea318fa6d231464cecd6d45abf9af15509243c"} Nov 22 07:32:08 crc kubenswrapper[4858]: I1122 07:32:08.163301 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd10936e-f961-492e-bd73-3488a8814ddb" containerID="99641427eeb65253914d82a97bf941f814ffbff5b0ee474acfba3d9647855543" exitCode=0 Nov 22 07:32:08 crc kubenswrapper[4858]: I1122 07:32:08.163415 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" event={"ID":"fd10936e-f961-492e-bd73-3488a8814ddb","Type":"ContainerDied","Data":"99641427eeb65253914d82a97bf941f814ffbff5b0ee474acfba3d9647855543"} Nov 22 07:32:09 crc kubenswrapper[4858]: I1122 07:32:09.170127 4858 generic.go:334] "Generic (PLEG): container finished" podID="fd10936e-f961-492e-bd73-3488a8814ddb" containerID="1027f97d02f2ee99d6da93cceb8f8682636537278fca8061d75e83a483743cf0" exitCode=0 Nov 22 07:32:09 crc kubenswrapper[4858]: I1122 
07:32:09.170170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" event={"ID":"fd10936e-f961-492e-bd73-3488a8814ddb","Type":"ContainerDied","Data":"1027f97d02f2ee99d6da93cceb8f8682636537278fca8061d75e83a483743cf0"} Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.411404 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.546652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgbfp\" (UniqueName: \"kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp\") pod \"fd10936e-f961-492e-bd73-3488a8814ddb\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.546825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util\") pod \"fd10936e-f961-492e-bd73-3488a8814ddb\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.546843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle\") pod \"fd10936e-f961-492e-bd73-3488a8814ddb\" (UID: \"fd10936e-f961-492e-bd73-3488a8814ddb\") " Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.547732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle" (OuterVolumeSpecName: "bundle") pod "fd10936e-f961-492e-bd73-3488a8814ddb" (UID: "fd10936e-f961-492e-bd73-3488a8814ddb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.552605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp" (OuterVolumeSpecName: "kube-api-access-jgbfp") pod "fd10936e-f961-492e-bd73-3488a8814ddb" (UID: "fd10936e-f961-492e-bd73-3488a8814ddb"). InnerVolumeSpecName "kube-api-access-jgbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.557884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util" (OuterVolumeSpecName: "util") pod "fd10936e-f961-492e-bd73-3488a8814ddb" (UID: "fd10936e-f961-492e-bd73-3488a8814ddb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.649664 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.649706 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd10936e-f961-492e-bd73-3488a8814ddb-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:10 crc kubenswrapper[4858]: I1122 07:32:10.649718 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgbfp\" (UniqueName: \"kubernetes.io/projected/fd10936e-f961-492e-bd73-3488a8814ddb-kube-api-access-jgbfp\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:11 crc kubenswrapper[4858]: I1122 07:32:11.184183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" event={"ID":"fd10936e-f961-492e-bd73-3488a8814ddb","Type":"ContainerDied","Data":"675bc40d63dfce2f8e61da05938e56574cc771977ea86d2f9ae25b45e46e77cb"} Nov 22 07:32:11 crc kubenswrapper[4858]: I1122 07:32:11.184232 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675bc40d63dfce2f8e61da05938e56574cc771977ea86d2f9ae25b45e46e77cb" Nov 22 07:32:11 crc kubenswrapper[4858]: I1122 07:32:11.184247 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c5l2q" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.353942 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9"] Nov 22 07:32:22 crc kubenswrapper[4858]: E1122 07:32:22.354660 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="pull" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.354671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="pull" Nov 22 07:32:22 crc kubenswrapper[4858]: E1122 07:32:22.354688 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="extract" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.354695 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="extract" Nov 22 07:32:22 crc kubenswrapper[4858]: E1122 07:32:22.354711 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="util" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.354718 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="util" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.354832 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd10936e-f961-492e-bd73-3488a8814ddb" containerName="extract" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.355203 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.357839 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.357950 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-56ssr" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.357991 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.358219 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.358572 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.374455 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9"] Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.503361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-apiservice-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.503701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-webhook-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.503890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t45z\" (UniqueName: \"kubernetes.io/projected/c1caab3b-8979-4213-b5d4-b7ae970d6879-kube-api-access-8t45z\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.604861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t45z\" (UniqueName: \"kubernetes.io/projected/c1caab3b-8979-4213-b5d4-b7ae970d6879-kube-api-access-8t45z\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.605227 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb"] Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.605254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-apiservice-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" 
(UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.605553 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-webhook-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.606659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.611150 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.611374 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8prwj" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.611414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.616407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-apiservice-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.623572 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb"] Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.624448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1caab3b-8979-4213-b5d4-b7ae970d6879-webhook-cert\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.633634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t45z\" (UniqueName: \"kubernetes.io/projected/c1caab3b-8979-4213-b5d4-b7ae970d6879-kube-api-access-8t45z\") pod \"metallb-operator-controller-manager-548b5c5578-8hjk9\" (UID: \"c1caab3b-8979-4213-b5d4-b7ae970d6879\") " pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.672696 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.707412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcsr\" (UniqueName: \"kubernetes.io/projected/42b0be39-5e2c-4396-aadc-c2680704c4ab-kube-api-access-2mcsr\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.707457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.707483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-webhook-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.809180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcsr\" (UniqueName: \"kubernetes.io/projected/42b0be39-5e2c-4396-aadc-c2680704c4ab-kube-api-access-2mcsr\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.809246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.809275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-webhook-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.813088 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.832068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b0be39-5e2c-4396-aadc-c2680704c4ab-webhook-cert\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " 
pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.834072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcsr\" (UniqueName: \"kubernetes.io/projected/42b0be39-5e2c-4396-aadc-c2680704c4ab-kube-api-access-2mcsr\") pod \"metallb-operator-webhook-server-68c788c74b-8bjhb\" (UID: \"42b0be39-5e2c-4396-aadc-c2680704c4ab\") " pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:22 crc kubenswrapper[4858]: I1122 07:32:22.967700 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:23 crc kubenswrapper[4858]: I1122 07:32:23.075313 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9"] Nov 22 07:32:23 crc kubenswrapper[4858]: W1122 07:32:23.092932 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1caab3b_8979_4213_b5d4_b7ae970d6879.slice/crio-316b7d1d00452817f91c54581bbde6ccc35b8d329cfbf1c1636a5c7916c1e50c WatchSource:0}: Error finding container 316b7d1d00452817f91c54581bbde6ccc35b8d329cfbf1c1636a5c7916c1e50c: Status 404 returned error can't find the container with id 316b7d1d00452817f91c54581bbde6ccc35b8d329cfbf1c1636a5c7916c1e50c Nov 22 07:32:23 crc kubenswrapper[4858]: I1122 07:32:23.258607 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" event={"ID":"c1caab3b-8979-4213-b5d4-b7ae970d6879","Type":"ContainerStarted","Data":"316b7d1d00452817f91c54581bbde6ccc35b8d329cfbf1c1636a5c7916c1e50c"} Nov 22 07:32:23 crc kubenswrapper[4858]: I1122 07:32:23.362941 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb"] Nov 22 07:32:23 crc kubenswrapper[4858]: W1122 07:32:23.372339 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42b0be39_5e2c_4396_aadc_c2680704c4ab.slice/crio-e057d11de2a5fe96b2ee3f6f237ec2e4d7acad3aaeb2add1b4ae4cbc6db07e23 WatchSource:0}: Error finding container e057d11de2a5fe96b2ee3f6f237ec2e4d7acad3aaeb2add1b4ae4cbc6db07e23: Status 404 returned error can't find the container with id e057d11de2a5fe96b2ee3f6f237ec2e4d7acad3aaeb2add1b4ae4cbc6db07e23 Nov 22 07:32:24 crc kubenswrapper[4858]: I1122 07:32:24.264961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" event={"ID":"42b0be39-5e2c-4396-aadc-c2680704c4ab","Type":"ContainerStarted","Data":"e057d11de2a5fe96b2ee3f6f237ec2e4d7acad3aaeb2add1b4ae4cbc6db07e23"} Nov 22 07:32:34 crc kubenswrapper[4858]: I1122 07:32:34.335555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" event={"ID":"c1caab3b-8979-4213-b5d4-b7ae970d6879","Type":"ContainerStarted","Data":"a207481dbcf5533426955870e7f8b0461d018f066c7f645eac371fe7ffb92079"} Nov 22 07:32:35 crc kubenswrapper[4858]: I1122 07:32:35.342275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" event={"ID":"42b0be39-5e2c-4396-aadc-c2680704c4ab","Type":"ContainerStarted","Data":"8e28adf17010473acb1803c669a79530e0f2d5415d604518bc9be03063868ba7"} Nov 
22 07:32:35 crc kubenswrapper[4858]: I1122 07:32:35.342368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:32:35 crc kubenswrapper[4858]: I1122 07:32:35.342945 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:32:35 crc kubenswrapper[4858]: I1122 07:32:35.362291 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" podStartSLOduration=2.648572654 podStartE2EDuration="13.362270974s" podCreationTimestamp="2025-11-22 07:32:22 +0000 UTC" firstStartedPulling="2025-11-22 07:32:23.096465608 +0000 UTC m=+1304.937888614" lastFinishedPulling="2025-11-22 07:32:33.810163928 +0000 UTC m=+1315.651586934" observedRunningTime="2025-11-22 07:32:35.359913749 +0000 UTC m=+1317.201336775" watchObservedRunningTime="2025-11-22 07:32:35.362270974 +0000 UTC m=+1317.203693980" Nov 22 07:32:35 crc kubenswrapper[4858]: I1122 07:32:35.385750 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" podStartSLOduration=2.920181874 podStartE2EDuration="13.385728466s" podCreationTimestamp="2025-11-22 07:32:22 +0000 UTC" firstStartedPulling="2025-11-22 07:32:23.375907857 +0000 UTC m=+1305.217330873" lastFinishedPulling="2025-11-22 07:32:33.841454459 +0000 UTC m=+1315.682877465" observedRunningTime="2025-11-22 07:32:35.381549661 +0000 UTC m=+1317.222972677" watchObservedRunningTime="2025-11-22 07:32:35.385728466 +0000 UTC m=+1317.227151472" Nov 22 07:32:52 crc kubenswrapper[4858]: I1122 07:32:52.997184 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-68c788c74b-8bjhb" Nov 22 07:33:12 crc kubenswrapper[4858]: I1122 07:33:12.675358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-548b5c5578-8hjk9" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.321332 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7s25x"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.322424 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.325241 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.326254 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dmxzv" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.329085 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jpxrl"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.332026 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.335879 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7s25x"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.338412 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.338430 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.402857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cee9756d-f6aa-49a8-8e38-ccef99e20e55-cert\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.402920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thz2n\" (UniqueName: \"kubernetes.io/projected/cee9756d-f6aa-49a8-8e38-ccef99e20e55-kube-api-access-thz2n\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.404621 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q9l67"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.405725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.409920 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.410514 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.410760 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9sjxd" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.410827 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.417883 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-mhv5p"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.418743 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.424567 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.433574 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-mhv5p"] Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7kq\" (UniqueName: \"kubernetes.io/projected/773050fd-b4f6-4413-89d3-f65533c3ee59-kube-api-access-4c7kq\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cee9756d-f6aa-49a8-8e38-ccef99e20e55-cert\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504280 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504345 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thz2n\" (UniqueName: \"kubernetes.io/projected/cee9756d-f6aa-49a8-8e38-ccef99e20e55-kube-api-access-thz2n\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/86e5263b-5daf-410c-8661-068c33c68f38-frr-startup\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76n5\" (UniqueName: \"kubernetes.io/projected/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-kube-api-access-t76n5\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86e5263b-5daf-410c-8661-068c33c68f38-metrics-certs\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-conf\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" 
Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-cert\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504503 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-metrics\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-metrics-certs\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-reloader\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metallb-excludel2\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jptlp\" (UniqueName: \"kubernetes.io/projected/86e5263b-5daf-410c-8661-068c33c68f38-kube-api-access-jptlp\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.504716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-sockets\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.510152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cee9756d-f6aa-49a8-8e38-ccef99e20e55-cert\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.523833 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-thz2n\" (UniqueName: \"kubernetes.io/projected/cee9756d-f6aa-49a8-8e38-ccef99e20e55-kube-api-access-thz2n\") pod \"frr-k8s-webhook-server-6998585d5-7s25x\" (UID: \"cee9756d-f6aa-49a8-8e38-ccef99e20e55\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.605869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-metrics\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.605923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-metrics-certs\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.605967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-reloader\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.605989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metallb-excludel2\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jptlp\" (UniqueName: \"kubernetes.io/projected/86e5263b-5daf-410c-8661-068c33c68f38-kube-api-access-jptlp\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-sockets\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7kq\" (UniqueName: \"kubernetes.io/projected/773050fd-b4f6-4413-89d3-f65533c3ee59-kube-api-access-4c7kq\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" 
(UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/86e5263b-5daf-410c-8661-068c33c68f38-frr-startup\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76n5\" (UniqueName: \"kubernetes.io/projected/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-kube-api-access-t76n5\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86e5263b-5daf-410c-8661-068c33c68f38-metrics-certs\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-conf\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.606377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-cert\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.607047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-metrics\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: E1122 07:33:13.607805 4858 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 22 07:33:13 crc kubenswrapper[4858]: E1122 07:33:13.607925 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs podName:cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5 nodeName:}" failed. No retries permitted until 2025-11-22 07:33:14.107889459 +0000 UTC m=+1355.949312465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs") pod "speaker-q9l67" (UID: "cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5") : secret "speaker-certs-secret" not found Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.608221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-conf\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: E1122 07:33:13.608269 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:33:13 crc kubenswrapper[4858]: E1122 07:33:13.608341 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist podName:cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5 nodeName:}" failed. No retries permitted until 2025-11-22 07:33:14.108300632 +0000 UTC m=+1355.949723708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist") pod "speaker-q9l67" (UID: "cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5") : secret "metallb-memberlist" not found Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.608474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-reloader\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.608496 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/86e5263b-5daf-410c-8661-068c33c68f38-frr-sockets\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.609199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/86e5263b-5daf-410c-8661-068c33c68f38-frr-startup\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.609243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metallb-excludel2\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.610018 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.611456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-metrics-certs\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.615816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/86e5263b-5daf-410c-8661-068c33c68f38-metrics-certs\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.620117 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/773050fd-b4f6-4413-89d3-f65533c3ee59-cert\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.622666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7kq\" (UniqueName: \"kubernetes.io/projected/773050fd-b4f6-4413-89d3-f65533c3ee59-kube-api-access-4c7kq\") pod \"controller-6c7b4b5f48-mhv5p\" (UID: \"773050fd-b4f6-4413-89d3-f65533c3ee59\") " pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.624636 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76n5\" (UniqueName: \"kubernetes.io/projected/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-kube-api-access-t76n5\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.625230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jptlp\" (UniqueName: \"kubernetes.io/projected/86e5263b-5daf-410c-8661-068c33c68f38-kube-api-access-jptlp\") pod \"frr-k8s-jpxrl\" (UID: \"86e5263b-5daf-410c-8661-068c33c68f38\") " pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.645303 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.652639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.740575 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:13 crc kubenswrapper[4858]: I1122 07:33:13.866440 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-7s25x"] Nov 22 07:33:13 crc kubenswrapper[4858]: W1122 07:33:13.910119 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcee9756d_f6aa_49a8_8e38_ccef99e20e55.slice/crio-77c1ab25beef04ae37cb4e7d697d8cbe9a246cae11b5e87ebb7ae55f7142c2c0 WatchSource:0}: Error finding container 77c1ab25beef04ae37cb4e7d697d8cbe9a246cae11b5e87ebb7ae55f7142c2c0: Status 404 returned error can't find the container with id 77c1ab25beef04ae37cb4e7d697d8cbe9a246cae11b5e87ebb7ae55f7142c2c0 Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.111850 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-mhv5p"] Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.113198 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.113306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:14 crc kubenswrapper[4858]: E1122 07:33:14.113461 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:33:14 crc kubenswrapper[4858]: E1122 07:33:14.113595 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist podName:cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5 nodeName:}" failed. No retries permitted until 2025-11-22 07:33:15.113566283 +0000 UTC m=+1356.954989469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist") pod "speaker-q9l67" (UID: "cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5") : secret "metallb-memberlist" not found Nov 22 07:33:14 crc kubenswrapper[4858]: W1122 07:33:14.120140 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773050fd_b4f6_4413_89d3_f65533c3ee59.slice/crio-652e790d4c506d4b82261cf02b0b65682d7f03c163140774844095715ea7800c WatchSource:0}: Error finding container 652e790d4c506d4b82261cf02b0b65682d7f03c163140774844095715ea7800c: Status 404 returned error can't find the container with id 652e790d4c506d4b82261cf02b0b65682d7f03c163140774844095715ea7800c Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.121189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-metrics-certs\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.554609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" event={"ID":"cee9756d-f6aa-49a8-8e38-ccef99e20e55","Type":"ContainerStarted","Data":"77c1ab25beef04ae37cb4e7d697d8cbe9a246cae11b5e87ebb7ae55f7142c2c0"} Nov 22 07:33:14 crc kubenswrapper[4858]: I1122 07:33:14.556001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-mhv5p" event={"ID":"773050fd-b4f6-4413-89d3-f65533c3ee59","Type":"ContainerStarted","Data":"652e790d4c506d4b82261cf02b0b65682d7f03c163140774844095715ea7800c"} Nov 22 07:33:15 crc kubenswrapper[4858]: I1122 07:33:15.126783 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:15 crc kubenswrapper[4858]: E1122 07:33:15.126974 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:33:15 crc kubenswrapper[4858]: E1122 07:33:15.127208 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist podName:cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5 nodeName:}" failed. No retries permitted until 2025-11-22 07:33:17.127193664 +0000 UTC m=+1358.968616670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist") pod "speaker-q9l67" (UID: "cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5") : secret "metallb-memberlist" not found Nov 22 07:33:15 crc kubenswrapper[4858]: I1122 07:33:15.571144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"39e2be148ff7bd5af78b6a7696d0b18ec8f2a7538f3b27c8a21347dc3358fd38"} Nov 22 07:33:15 crc kubenswrapper[4858]: I1122 07:33:15.572907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-mhv5p" event={"ID":"773050fd-b4f6-4413-89d3-f65533c3ee59","Type":"ContainerStarted","Data":"7eaeff7b00b2f8dbf0044e5670011509c0ac6b94de1dc3f2f6f2eda6ea51c306"} Nov 22 07:33:16 crc kubenswrapper[4858]: I1122 07:33:16.583957 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-mhv5p" event={"ID":"773050fd-b4f6-4413-89d3-f65533c3ee59","Type":"ContainerStarted","Data":"c2595b924c194933d98056679bee5db1225c06f7077e5b5828286465d7ac3b9c"} Nov 22 07:33:16 crc kubenswrapper[4858]: I1122 07:33:16.584403 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:16 crc kubenswrapper[4858]: I1122 07:33:16.605493 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-mhv5p" podStartSLOduration=3.60547086 podStartE2EDuration="3.60547086s" podCreationTimestamp="2025-11-22 07:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:33:16.600955837 +0000 UTC m=+1358.442378843" watchObservedRunningTime="2025-11-22 07:33:16.60547086 +0000 UTC m=+1358.446893866" Nov 22 07:33:17 crc kubenswrapper[4858]: I1122 07:33:17.151448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:17 crc kubenswrapper[4858]: I1122 07:33:17.160265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5-memberlist\") pod \"speaker-q9l67\" (UID: \"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5\") " pod="metallb-system/speaker-q9l67" Nov 22 07:33:17 crc kubenswrapper[4858]: I1122 07:33:17.321691 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-q9l67" Nov 22 07:33:17 crc kubenswrapper[4858]: W1122 07:33:17.358760 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf5f18f1_9ba2_48ab_9a55_2882db9ee7e5.slice/crio-e29dbd3c7f59bec36fa8a4d194d4c240938e6a7e086622dfc53618d1de7e9d22 WatchSource:0}: Error finding container e29dbd3c7f59bec36fa8a4d194d4c240938e6a7e086622dfc53618d1de7e9d22: Status 404 returned error can't find the container with id e29dbd3c7f59bec36fa8a4d194d4c240938e6a7e086622dfc53618d1de7e9d22 Nov 22 07:33:17 crc kubenswrapper[4858]: I1122 07:33:17.595437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q9l67" event={"ID":"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5","Type":"ContainerStarted","Data":"e29dbd3c7f59bec36fa8a4d194d4c240938e6a7e086622dfc53618d1de7e9d22"} Nov 22 07:33:18 crc kubenswrapper[4858]: I1122 07:33:18.602222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q9l67" event={"ID":"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5","Type":"ContainerStarted","Data":"66c86dc8019a05fcd6f80fa33b348748248478d871c6f372e814bead58d43a26"} Nov 22 07:33:19 crc kubenswrapper[4858]: I1122 07:33:19.647555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q9l67" event={"ID":"cf5f18f1-9ba2-48ab-9a55-2882db9ee7e5","Type":"ContainerStarted","Data":"7346dd56ce8d14bd13a897207dbec392ce5326fbe562742156b61f0e01287053"} Nov 22 07:33:19 crc kubenswrapper[4858]: I1122 07:33:19.648929 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q9l67" Nov 22 07:33:27 crc kubenswrapper[4858]: I1122 07:33:27.326006 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q9l67" Nov 22 07:33:27 crc kubenswrapper[4858]: I1122 07:33:27.349986 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q9l67" podStartSLOduration=14.349965528 podStartE2EDuration="14.349965528s" podCreationTimestamp="2025-11-22 07:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:33:19.679707095 +0000 UTC m=+1361.521130131" watchObservedRunningTime="2025-11-22 07:33:27.349965528 +0000 UTC m=+1369.191388554" Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.890841 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq"] Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.892651 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.898505 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.917538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq"] Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.921089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.921155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:28 crc kubenswrapper[4858]: I1122 07:33:28.921401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.023836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.024269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.024356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.024774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.025168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.060165 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.211671 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.732371 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq"] Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.735348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" event={"ID":"cee9756d-f6aa-49a8-8e38-ccef99e20e55","Type":"ContainerStarted","Data":"3d26e0ea569c2eb25f8a6f19fff36a0273e973cef5aa50dd248692b980817fdb"} Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.735603 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.737337 4858 generic.go:334] "Generic (PLEG): container finished" podID="86e5263b-5daf-410c-8661-068c33c68f38" containerID="f7464b6e0ca72659306c43c0319307bd02ce652e6a951d9b0f7be34bbdd18eca" exitCode=0 Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.737388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerDied","Data":"f7464b6e0ca72659306c43c0319307bd02ce652e6a951d9b0f7be34bbdd18eca"} Nov 22 07:33:29 crc kubenswrapper[4858]: W1122 07:33:29.737568 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3682556b_76ec_407f_a6c5_53062a27cf9d.slice/crio-894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b WatchSource:0}: Error finding container 894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b: Status 404 returned error can't find the container with id 894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b Nov 22 07:33:29 crc kubenswrapper[4858]: I1122 07:33:29.766514 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" podStartSLOduration=1.8062055 podStartE2EDuration="16.76648834s" 
podCreationTimestamp="2025-11-22 07:33:13 +0000 UTC" firstStartedPulling="2025-11-22 07:33:13.912417521 +0000 UTC m=+1355.753840527" lastFinishedPulling="2025-11-22 07:33:28.872700361 +0000 UTC m=+1370.714123367" observedRunningTime="2025-11-22 07:33:29.761046757 +0000 UTC m=+1371.602469773" watchObservedRunningTime="2025-11-22 07:33:29.76648834 +0000 UTC m=+1371.607911356" Nov 22 07:33:30 crc kubenswrapper[4858]: I1122 07:33:30.748208 4858 generic.go:334] "Generic (PLEG): container finished" podID="86e5263b-5daf-410c-8661-068c33c68f38" containerID="b524f7daf445fb2a72142801254a9c9c1d75b2ad490081d0e479fa0743359a59" exitCode=0 Nov 22 07:33:30 crc kubenswrapper[4858]: I1122 07:33:30.748432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerDied","Data":"b524f7daf445fb2a72142801254a9c9c1d75b2ad490081d0e479fa0743359a59"} Nov 22 07:33:30 crc kubenswrapper[4858]: I1122 07:33:30.753042 4858 generic.go:334] "Generic (PLEG): container finished" podID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerID="14d17e417475477e3acd7273c5d8c33cf5afa67e06478f08840268786fddb415" exitCode=0 Nov 22 07:33:30 crc kubenswrapper[4858]: I1122 07:33:30.753200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" event={"ID":"3682556b-76ec-407f-a6c5-53062a27cf9d","Type":"ContainerDied","Data":"14d17e417475477e3acd7273c5d8c33cf5afa67e06478f08840268786fddb415"} Nov 22 07:33:30 crc kubenswrapper[4858]: I1122 07:33:30.753250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" event={"ID":"3682556b-76ec-407f-a6c5-53062a27cf9d","Type":"ContainerStarted","Data":"894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b"} Nov 22 07:33:33 crc kubenswrapper[4858]: I1122 07:33:33.744550 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-mhv5p" Nov 22 07:33:37 crc kubenswrapper[4858]: I1122 07:33:37.792426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"23c5b07607798cd937e6fc11cf74d9e0302df841ac38224fc9da6b6570508682"} Nov 22 07:33:38 crc kubenswrapper[4858]: I1122 07:33:38.799097 4858 generic.go:334] "Generic (PLEG): container finished" podID="86e5263b-5daf-410c-8661-068c33c68f38" containerID="23c5b07607798cd937e6fc11cf74d9e0302df841ac38224fc9da6b6570508682" exitCode=0 Nov 22 07:33:38 crc kubenswrapper[4858]: I1122 07:33:38.799146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerDied","Data":"23c5b07607798cd937e6fc11cf74d9e0302df841ac38224fc9da6b6570508682"} Nov 22 07:33:39 crc kubenswrapper[4858]: I1122 07:33:39.806644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"d414a30ca22af0d5f1d1681f37691432ffed49eef2a6e45eb487f05584b03b0d"} Nov 22 07:33:40 crc kubenswrapper[4858]: I1122 07:33:40.818018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" 
event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"82962cf438599733003fb6088e3b22181c15c932bfb9f3808d7e4516372a66c8"} Nov 22 07:33:40 crc kubenswrapper[4858]: I1122 07:33:40.818384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"a68b3728f690a9c742b5a5a18fa88361cecbc60d6aed14736588d6de2aa245c6"} Nov 22 07:33:43 crc kubenswrapper[4858]: I1122 07:33:43.655096 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-7s25x" Nov 22 07:33:43 crc kubenswrapper[4858]: I1122 07:33:43.840043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"f561eeb7586c7ca432bf11a766dc8a7b5ea7243221670642f36a5b8c96f1584c"} Nov 22 07:33:43 crc kubenswrapper[4858]: I1122 07:33:43.840093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"19d5a0a28072aa0f905e8e834f8981eee5f65696e2bd06524febff55648b5251"} Nov 22 07:33:44 crc kubenswrapper[4858]: I1122 07:33:44.847538 4858 generic.go:334] "Generic (PLEG): container finished" podID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerID="b2424610fac4b86cad8d9a03e74c3fad3ab2ce429c6ae994d3525a21f343cbe6" exitCode=0 Nov 22 07:33:44 crc kubenswrapper[4858]: I1122 07:33:44.847622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" event={"ID":"3682556b-76ec-407f-a6c5-53062a27cf9d","Type":"ContainerDied","Data":"b2424610fac4b86cad8d9a03e74c3fad3ab2ce429c6ae994d3525a21f343cbe6"} Nov 22 07:33:44 crc kubenswrapper[4858]: I1122 07:33:44.854527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpxrl" event={"ID":"86e5263b-5daf-410c-8661-068c33c68f38","Type":"ContainerStarted","Data":"c5aa8071138c1db19ba3645b6243e4ea8d2bc92989315601360458b33d082a4e"} Nov 22 07:33:44 crc kubenswrapper[4858]: I1122 07:33:44.854754 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:44 crc kubenswrapper[4858]: I1122 07:33:44.889711 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jpxrl" podStartSLOduration=18.194364062 podStartE2EDuration="31.889669998s" podCreationTimestamp="2025-11-22 07:33:13 +0000 UTC" firstStartedPulling="2025-11-22 07:33:15.205591964 +0000 UTC m=+1357.047014970" lastFinishedPulling="2025-11-22 07:33:28.90089789 +0000 UTC m=+1370.742320906" observedRunningTime="2025-11-22 07:33:44.886533748 +0000 UTC m=+1386.727956764" watchObservedRunningTime="2025-11-22 07:33:44.889669998 +0000 UTC m=+1386.731093004" Nov 22 07:33:45 crc kubenswrapper[4858]: I1122 07:33:45.311949 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:33:45 crc kubenswrapper[4858]: I1122 07:33:45.312556 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:33:45 crc kubenswrapper[4858]: I1122 07:33:45.862106 4858 generic.go:334] "Generic (PLEG): container finished" podID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerID="0bee4b11e2099406a6db72d7c3ce272aaaa2f276b43a74e2eef9dcfb67ebb8cf" exitCode=0 Nov 22 07:33:45 crc kubenswrapper[4858]: I1122 07:33:45.862194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" event={"ID":"3682556b-76ec-407f-a6c5-53062a27cf9d","Type":"ContainerDied","Data":"0bee4b11e2099406a6db72d7c3ce272aaaa2f276b43a74e2eef9dcfb67ebb8cf"} Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.111768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.256496 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle\") pod \"3682556b-76ec-407f-a6c5-53062a27cf9d\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.256652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt\") pod \"3682556b-76ec-407f-a6c5-53062a27cf9d\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.256899 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util\") pod \"3682556b-76ec-407f-a6c5-53062a27cf9d\" (UID: \"3682556b-76ec-407f-a6c5-53062a27cf9d\") " Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.258026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle" (OuterVolumeSpecName: "bundle") pod "3682556b-76ec-407f-a6c5-53062a27cf9d" (UID: "3682556b-76ec-407f-a6c5-53062a27cf9d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.263132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt" (OuterVolumeSpecName: "kube-api-access-7cvmt") pod "3682556b-76ec-407f-a6c5-53062a27cf9d" (UID: "3682556b-76ec-407f-a6c5-53062a27cf9d"). InnerVolumeSpecName "kube-api-access-7cvmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.269436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util" (OuterVolumeSpecName: "util") pod "3682556b-76ec-407f-a6c5-53062a27cf9d" (UID: "3682556b-76ec-407f-a6c5-53062a27cf9d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.359204 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.359287 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cvmt\" (UniqueName: \"kubernetes.io/projected/3682556b-76ec-407f-a6c5-53062a27cf9d-kube-api-access-7cvmt\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.359302 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3682556b-76ec-407f-a6c5-53062a27cf9d-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.876740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" event={"ID":"3682556b-76ec-407f-a6c5-53062a27cf9d","Type":"ContainerDied","Data":"894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b"} Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.877140 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c09cd0cfe241f3dee2b29098bd87031f0bd01a672bd43d396584c9c97470b" Nov 22 07:33:47 crc kubenswrapper[4858]: I1122 07:33:47.876805 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a792qq" Nov 22 07:33:48 crc kubenswrapper[4858]: I1122 07:33:48.653272 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:48 crc kubenswrapper[4858]: I1122 07:33:48.692433 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.934701 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf"] Nov 22 07:33:51 crc kubenswrapper[4858]: E1122 07:33:51.935362 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="util" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.935381 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="util" Nov 22 07:33:51 crc kubenswrapper[4858]: E1122 07:33:51.935401 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="pull" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.935409 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="pull" Nov 22 07:33:51 crc kubenswrapper[4858]: E1122 07:33:51.935430 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="extract" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.935439 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="extract" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.935594 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3682556b-76ec-407f-a6c5-53062a27cf9d" containerName="extract" Nov 22 07:33:51 crc 
kubenswrapper[4858]: I1122 07:33:51.936166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.938721 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.945049 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-rmq86" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.945667 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 22 07:33:51 crc kubenswrapper[4858]: I1122 07:33:51.965412 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf"] Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.118009 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c560787a-6b4b-42bc-91aa-add0dc6897d1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.118127 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6b4\" (UniqueName: \"kubernetes.io/projected/c560787a-6b4b-42bc-91aa-add0dc6897d1-kube-api-access-hz6b4\") pod \"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.218927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c560787a-6b4b-42bc-91aa-add0dc6897d1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.219016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6b4\" (UniqueName: \"kubernetes.io/projected/c560787a-6b4b-42bc-91aa-add0dc6897d1-kube-api-access-hz6b4\") pod \"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.219510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c560787a-6b4b-42bc-91aa-add0dc6897d1-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.244191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6b4\" (UniqueName: \"kubernetes.io/projected/c560787a-6b4b-42bc-91aa-add0dc6897d1-kube-api-access-hz6b4\") pod 
\"cert-manager-operator-controller-manager-64cf6dff88-htqcf\" (UID: \"c560787a-6b4b-42bc-91aa-add0dc6897d1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.256696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.748175 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf"] Nov 22 07:33:52 crc kubenswrapper[4858]: I1122 07:33:52.905011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" event={"ID":"c560787a-6b4b-42bc-91aa-add0dc6897d1","Type":"ContainerStarted","Data":"3ce717c56ebf0d74b745cc4271b6d4ef3bbb0cf9bef2f83ef377a5806bb2892c"} Nov 22 07:33:53 crc kubenswrapper[4858]: I1122 07:33:53.656022 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jpxrl" Nov 22 07:34:05 crc kubenswrapper[4858]: I1122 07:34:05.996490 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" event={"ID":"c560787a-6b4b-42bc-91aa-add0dc6897d1","Type":"ContainerStarted","Data":"70cd88c508a07f9c8dfb1cd9e79ebb116d3480b83d7567ffa6d6185fe8fe6c69"} Nov 22 07:34:06 crc kubenswrapper[4858]: I1122 07:34:06.020000 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-htqcf" podStartSLOduration=2.146578608 podStartE2EDuration="15.019979686s" podCreationTimestamp="2025-11-22 07:33:51 +0000 UTC" firstStartedPulling="2025-11-22 07:33:52.749858097 +0000 UTC m=+1394.591281113" lastFinishedPulling="2025-11-22 07:34:05.623259185 +0000 UTC m=+1407.464682191" observedRunningTime="2025-11-22 07:34:06.015480252 +0000 UTC m=+1407.856903278" watchObservedRunningTime="2025-11-22 07:34:06.019979686 +0000 UTC m=+1407.861402692" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.707194 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv"] Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.708425 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.710300 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xdvtx" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.710414 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.710687 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.720784 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv"] Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.819262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bb9\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-kube-api-access-f2bb9\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.819378 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.920808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bb9\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-kube-api-access-f2bb9\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.920853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.939560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:11 crc kubenswrapper[4858]: I1122 07:34:11.939578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bb9\" (UniqueName: \"kubernetes.io/projected/469865e8-46a9-4329-9420-2146c2968fb9-kube-api-access-f2bb9\") pod \"cert-manager-cainjector-855d9ccff4-tqwrv\" (UID: \"469865e8-46a9-4329-9420-2146c2968fb9\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.024267 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.523067 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv"] Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.925060 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hn4hn"] Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.925803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.927454 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vxj7b" Nov 22 07:34:12 crc kubenswrapper[4858]: I1122 07:34:12.937457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hn4hn"] Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.033743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62tmz\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-kube-api-access-62tmz\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.033897 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.050405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" event={"ID":"469865e8-46a9-4329-9420-2146c2968fb9","Type":"ContainerStarted","Data":"3201fd2eab1b23ab565ecab53ede694be074b64832e38953d0aa5263c2e154ae"} Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.135436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62tmz\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-kube-api-access-62tmz\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.135537 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.160239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62tmz\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-kube-api-access-62tmz\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.160776 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ef7f2e-b400-4c2e-ae5f-f6352ce64750-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-hn4hn\" (UID: \"84ef7f2e-b400-4c2e-ae5f-f6352ce64750\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.243100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:13 crc kubenswrapper[4858]: I1122 07:34:13.457058 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-hn4hn"] Nov 22 07:34:14 crc kubenswrapper[4858]: I1122 07:34:14.057538 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" event={"ID":"84ef7f2e-b400-4c2e-ae5f-f6352ce64750","Type":"ContainerStarted","Data":"2cbcb6a83e8b8a8d091fea0311a377885a4483daf1460664ebfa94b5b61ce182"} Nov 22 07:34:15 crc kubenswrapper[4858]: I1122 07:34:15.312649 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:34:15 crc kubenswrapper[4858]: I1122 07:34:15.312730 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.692123 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-x57c9"] Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.694484 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.701255 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-x57c9"] Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.702620 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-htcgp" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.805853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-bound-sa-token\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.805917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwvc\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-kube-api-access-zpwvc\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.907779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-bound-sa-token\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.907834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwvc\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-kube-api-access-zpwvc\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.926485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwvc\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-kube-api-access-zpwvc\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:28 crc kubenswrapper[4858]: I1122 07:34:28.933999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c807a61-e48b-4334-985c-9537372b9e15-bound-sa-token\") pod \"cert-manager-86cb77c54b-x57c9\" (UID: \"3c807a61-e48b-4334-985c-9537372b9e15\") " pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:29 crc kubenswrapper[4858]: I1122 07:34:29.024050 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-x57c9" Nov 22 07:34:33 crc kubenswrapper[4858]: I1122 07:34:33.512065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-x57c9"] Nov 22 07:34:33 crc kubenswrapper[4858]: W1122 07:34:33.517128 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c807a61_e48b_4334_985c_9537372b9e15.slice/crio-e62c6c0c9cbee2a47ee3252c29f8c657e6ff095daf6051ffb727dbc460b21061 WatchSource:0}: Error finding container e62c6c0c9cbee2a47ee3252c29f8c657e6ff095daf6051ffb727dbc460b21061: Status 404 returned error can't find the container with id e62c6c0c9cbee2a47ee3252c29f8c657e6ff095daf6051ffb727dbc460b21061 Nov 22 07:34:34 crc kubenswrapper[4858]: I1122 07:34:34.191242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-x57c9" event={"ID":"3c807a61-e48b-4334-985c-9537372b9e15","Type":"ContainerStarted","Data":"e62c6c0c9cbee2a47ee3252c29f8c657e6ff095daf6051ffb727dbc460b21061"} Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.236816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" event={"ID":"469865e8-46a9-4329-9420-2146c2968fb9","Type":"ContainerStarted","Data":"7ff400d5f225c7be0fa5f40b664a7a5538f4943936be12ef01e8da124d2123d8"} Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.239473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" event={"ID":"84ef7f2e-b400-4c2e-ae5f-f6352ce64750","Type":"ContainerStarted","Data":"d8ad239f307fc70faf931eb3f2593849dc702ab87fc0823c2be36cc5760c9818"} Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.239561 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.242264 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-x57c9" event={"ID":"3c807a61-e48b-4334-985c-9537372b9e15","Type":"ContainerStarted","Data":"d54a9935c15d5ab9a4e6e7e891d680e27ae2be11b4f846a8461f428d83a6e02a"} Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.255971 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-tqwrv" podStartSLOduration=3.109596549 podStartE2EDuration="30.25594647s" podCreationTimestamp="2025-11-22 07:34:11 +0000 UTC" firstStartedPulling="2025-11-22 07:34:12.541705586 +0000 UTC m=+1414.383128592" lastFinishedPulling="2025-11-22 07:34:39.688055507 +0000 UTC m=+1441.529478513" observedRunningTime="2025-11-22 07:34:41.253173872 +0000 UTC m=+1443.094596878" watchObservedRunningTime="2025-11-22 07:34:41.25594647 +0000 UTC m=+1443.097369476" Nov 22 07:34:41 crc kubenswrapper[4858]: I1122 07:34:41.311366 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-x57c9" podStartSLOduration=7.133770082 podStartE2EDuration="13.311340097s" podCreationTimestamp="2025-11-22 07:34:28 +0000 UTC" firstStartedPulling="2025-11-22 07:34:33.518694614 +0000 UTC m=+1435.360117620" lastFinishedPulling="2025-11-22 07:34:39.696264629 +0000 UTC m=+1441.537687635" observedRunningTime="2025-11-22 07:34:41.308278519 +0000 UTC m=+1443.149701525" watchObservedRunningTime="2025-11-22 07:34:41.311340097 +0000 UTC m=+1443.152763113" Nov 22 07:34:41 
crc kubenswrapper[4858]: I1122 07:34:41.312094 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" podStartSLOduration=3.06681535 podStartE2EDuration="29.31208952s" podCreationTimestamp="2025-11-22 07:34:12 +0000 UTC" firstStartedPulling="2025-11-22 07:34:13.484556989 +0000 UTC m=+1415.325979995" lastFinishedPulling="2025-11-22 07:34:39.729831159 +0000 UTC m=+1441.571254165" observedRunningTime="2025-11-22 07:34:41.284937215 +0000 UTC m=+1443.126360221" watchObservedRunningTime="2025-11-22 07:34:41.31208952 +0000 UTC m=+1443.153512526" Nov 22 07:34:45 crc kubenswrapper[4858]: I1122 07:34:45.312380 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:34:45 crc kubenswrapper[4858]: I1122 07:34:45.312788 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:34:45 crc kubenswrapper[4858]: I1122 07:34:45.312851 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:34:45 crc kubenswrapper[4858]: I1122 07:34:45.313539 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:34:45 crc kubenswrapper[4858]: I1122 07:34:45.313608 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e" gracePeriod=600 Nov 22 07:34:46 crc kubenswrapper[4858]: I1122 07:34:46.273939 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e" exitCode=0 Nov 22 07:34:46 crc kubenswrapper[4858]: I1122 07:34:46.273998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e"} Nov 22 07:34:46 crc kubenswrapper[4858]: I1122 07:34:46.274553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd"} Nov 22 07:34:46 crc kubenswrapper[4858]: I1122 07:34:46.274581 4858 scope.go:117] "RemoveContainer" containerID="7f316aa9fc732e6e2efa18f2b02b78cbee761cabcd6a33c8efb9930c2da311b8" Nov 22 07:34:48 crc kubenswrapper[4858]: I1122 
07:34:48.245914 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-hn4hn" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.709579 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.710834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.715881 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.716148 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.722681 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.723139 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ngjjh" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.818395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glq2c\" (UniqueName: \"kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c\") pod \"openstack-operator-index-xzvw8\" (UID: \"28be8709-1197-4634-8d12-3a16bb4a2e3c\") " pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.919615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glq2c\" (UniqueName: \"kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c\") pod \"openstack-operator-index-xzvw8\" (UID: \"28be8709-1197-4634-8d12-3a16bb4a2e3c\") " pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:51 crc kubenswrapper[4858]: I1122 07:34:51.939884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glq2c\" (UniqueName: \"kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c\") pod \"openstack-operator-index-xzvw8\" (UID: \"28be8709-1197-4634-8d12-3a16bb4a2e3c\") " pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:52 crc kubenswrapper[4858]: I1122 07:34:52.032674 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:52 crc kubenswrapper[4858]: I1122 07:34:52.240880 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:34:52 crc kubenswrapper[4858]: I1122 07:34:52.323251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzvw8" event={"ID":"28be8709-1197-4634-8d12-3a16bb4a2e3c","Type":"ContainerStarted","Data":"01f4c618aa0b079fa919696d2ab5230fd495f42f191f7a0c4014bdab72778fd6"} Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.079892 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.688795 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v29kb"] Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.689934 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.694514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v29kb"] Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.769722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmnb\" (UniqueName: \"kubernetes.io/projected/a39961df-b050-4ca9-8024-6ee2be98002a-kube-api-access-bdmnb\") pod \"openstack-operator-index-v29kb\" (UID: \"a39961df-b050-4ca9-8024-6ee2be98002a\") " pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.871621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdmnb\" (UniqueName: \"kubernetes.io/projected/a39961df-b050-4ca9-8024-6ee2be98002a-kube-api-access-bdmnb\") pod \"openstack-operator-index-v29kb\" (UID: \"a39961df-b050-4ca9-8024-6ee2be98002a\") " pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:34:55 crc kubenswrapper[4858]: I1122 07:34:55.901565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmnb\" (UniqueName: \"kubernetes.io/projected/a39961df-b050-4ca9-8024-6ee2be98002a-kube-api-access-bdmnb\") pod \"openstack-operator-index-v29kb\" (UID: \"a39961df-b050-4ca9-8024-6ee2be98002a\") " pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:34:56 crc kubenswrapper[4858]: I1122 07:34:56.014285 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:34:56 crc kubenswrapper[4858]: I1122 07:34:56.416140 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v29kb"] Nov 22 07:34:57 crc kubenswrapper[4858]: I1122 07:34:57.363555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v29kb" event={"ID":"a39961df-b050-4ca9-8024-6ee2be98002a","Type":"ContainerStarted","Data":"83930b1333355679de2f7f1d20af4304fd74e1d39bf9cba7a811c71c9de7a014"} Nov 22 07:34:58 crc kubenswrapper[4858]: I1122 07:34:58.373662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzvw8" event={"ID":"28be8709-1197-4634-8d12-3a16bb4a2e3c","Type":"ContainerStarted","Data":"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d"} Nov 22 07:34:58 crc kubenswrapper[4858]: I1122 07:34:58.373845 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xzvw8" podUID="28be8709-1197-4634-8d12-3a16bb4a2e3c" containerName="registry-server" containerID="cri-o://3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d" gracePeriod=2 Nov 22 07:34:58 crc kubenswrapper[4858]: I1122 07:34:58.393730 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xzvw8" podStartSLOduration=2.291283729 podStartE2EDuration="7.393704802s" podCreationTimestamp="2025-11-22 07:34:51 +0000 UTC" firstStartedPulling="2025-11-22 07:34:52.252131254 +0000 UTC m=+1454.093554260" lastFinishedPulling="2025-11-22 07:34:57.354552327 +0000 UTC m=+1459.195975333" observedRunningTime="2025-11-22 07:34:58.388400613 +0000 UTC m=+1460.229823629" watchObservedRunningTime="2025-11-22 07:34:58.393704802 +0000 UTC m=+1460.235127818" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.377882 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.382547 4858 generic.go:334] "Generic (PLEG): container finished" podID="28be8709-1197-4634-8d12-3a16bb4a2e3c" containerID="3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d" exitCode=0 Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.382710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzvw8" event={"ID":"28be8709-1197-4634-8d12-3a16bb4a2e3c","Type":"ContainerDied","Data":"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d"} Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.382895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xzvw8" event={"ID":"28be8709-1197-4634-8d12-3a16bb4a2e3c","Type":"ContainerDied","Data":"01f4c618aa0b079fa919696d2ab5230fd495f42f191f7a0c4014bdab72778fd6"} Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.382793 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xzvw8" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.382982 4858 scope.go:117] "RemoveContainer" containerID="3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.384179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v29kb" event={"ID":"a39961df-b050-4ca9-8024-6ee2be98002a","Type":"ContainerStarted","Data":"d3b1c9e9f0a92e8f1213c683c690917b0d5db0e1013b8b0d290798f55e7e7626"} Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.403336 4858 scope.go:117] "RemoveContainer" containerID="3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d" Nov 22 07:34:59 crc kubenswrapper[4858]: E1122 07:34:59.404899 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d\": container with ID starting with 3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d not found: ID does not exist" containerID="3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.404976 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d"} err="failed to get container status \"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d\": rpc error: code = NotFound desc = could not find container \"3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d\": container with ID starting with 3b5822467262f1892b504ecc308d3bcbe7e9735ae655f6fc40b5195609caa89d not found: ID does not exist" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.524991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glq2c\" (UniqueName: \"kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c\") pod \"28be8709-1197-4634-8d12-3a16bb4a2e3c\" (UID: \"28be8709-1197-4634-8d12-3a16bb4a2e3c\") " Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.531329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c" (OuterVolumeSpecName: "kube-api-access-glq2c") pod "28be8709-1197-4634-8d12-3a16bb4a2e3c" (UID: "28be8709-1197-4634-8d12-3a16bb4a2e3c"). InnerVolumeSpecName "kube-api-access-glq2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.626771 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glq2c\" (UniqueName: \"kubernetes.io/projected/28be8709-1197-4634-8d12-3a16bb4a2e3c-kube-api-access-glq2c\") on node \"crc\" DevicePath \"\"" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.697749 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v29kb" podStartSLOduration=3.416029364 podStartE2EDuration="4.697726162s" podCreationTimestamp="2025-11-22 07:34:55 +0000 UTC" firstStartedPulling="2025-11-22 07:34:57.351611253 +0000 UTC m=+1459.193034259" lastFinishedPulling="2025-11-22 07:34:58.633308051 +0000 UTC m=+1460.474731057" observedRunningTime="2025-11-22 07:34:59.415146381 +0000 UTC m=+1461.256569397" watchObservedRunningTime="2025-11-22 07:34:59.697726162 +0000 UTC m=+1461.539149188" Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.701061 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:34:59 crc kubenswrapper[4858]: I1122 07:34:59.705083 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-xzvw8"] Nov 22 07:35:01 crc kubenswrapper[4858]: I1122 07:35:01.543910 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28be8709-1197-4634-8d12-3a16bb4a2e3c" path="/var/lib/kubelet/pods/28be8709-1197-4634-8d12-3a16bb4a2e3c/volumes" Nov 22 07:35:06 crc kubenswrapper[4858]: I1122 07:35:06.015143 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:35:06 crc kubenswrapper[4858]: I1122 07:35:06.015615 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:35:06 crc kubenswrapper[4858]: I1122 07:35:06.049580 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:35:06 crc kubenswrapper[4858]: I1122 07:35:06.450455 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v29kb" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.525024 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j"] Nov 22 07:35:07 crc kubenswrapper[4858]: E1122 07:35:07.525609 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28be8709-1197-4634-8d12-3a16bb4a2e3c" containerName="registry-server" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.525627 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="28be8709-1197-4634-8d12-3a16bb4a2e3c" containerName="registry-server" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.525768 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="28be8709-1197-4634-8d12-3a16bb4a2e3c" containerName="registry-server" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.526766 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.529789 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-b6497" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.545643 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j"] Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.640617 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.640758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k975m\" (UniqueName: \"kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.640806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.742013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.742083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k975m\" (UniqueName: \"kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.742124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.742641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.742765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.776902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k975m\" (UniqueName: \"kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:07 crc kubenswrapper[4858]: I1122 07:35:07.848522 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:08 crc kubenswrapper[4858]: I1122 07:35:08.261549 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j"] Nov 22 07:35:08 crc kubenswrapper[4858]: I1122 07:35:08.439219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerStarted","Data":"ff83cf48c044530f6fcb0670d07db1ed4ad882f53769550b78cbd4f2d261c5ea"} Nov 22 07:35:25 crc kubenswrapper[4858]: I1122 07:35:25.541185 4858 generic.go:334] "Generic (PLEG): container finished" podID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerID="a5d055720cd61d2b7bfd36e859b1cd3177240072164c3a4d2977ba47c5ca566b" exitCode=0 Nov 22 07:35:25 crc kubenswrapper[4858]: I1122 07:35:25.542192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerDied","Data":"a5d055720cd61d2b7bfd36e859b1cd3177240072164c3a4d2977ba47c5ca566b"} Nov 22 07:35:27 crc kubenswrapper[4858]: I1122 07:35:27.559880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerStarted","Data":"b30e72ed4d090c1fde7b0f0f5858657f1b545ab68a2faabf6fea52d6fd62977a"} Nov 22 07:35:28 crc kubenswrapper[4858]: I1122 07:35:28.569057 4858 generic.go:334] "Generic (PLEG): container finished" podID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerID="b30e72ed4d090c1fde7b0f0f5858657f1b545ab68a2faabf6fea52d6fd62977a" exitCode=0 Nov 22 07:35:28 crc kubenswrapper[4858]: I1122 07:35:28.569136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" 
event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerDied","Data":"b30e72ed4d090c1fde7b0f0f5858657f1b545ab68a2faabf6fea52d6fd62977a"} Nov 22 07:35:29 crc kubenswrapper[4858]: I1122 07:35:29.580022 4858 generic.go:334] "Generic (PLEG): container finished" podID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerID="92c5b5cd2040ae18be81461bd5fc3ef8f66c2b2fb989aece952563332f920bb7" exitCode=0 Nov 22 07:35:29 crc kubenswrapper[4858]: I1122 07:35:29.580226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerDied","Data":"92c5b5cd2040ae18be81461bd5fc3ef8f66c2b2fb989aece952563332f920bb7"} Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.825615 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.926213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util\") pod \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.926336 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle\") pod \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.926369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k975m\" (UniqueName: \"kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m\") pod \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\" (UID: \"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8\") " Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.927249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle" (OuterVolumeSpecName: "bundle") pod "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" (UID: "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:30 crc kubenswrapper[4858]: I1122 07:35:30.931395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m" (OuterVolumeSpecName: "kube-api-access-k975m") pod "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" (UID: "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8"). InnerVolumeSpecName "kube-api-access-k975m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.028165 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.028218 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k975m\" (UniqueName: \"kubernetes.io/projected/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-kube-api-access-k975m\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.283603 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util" (OuterVolumeSpecName: "util") pod "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" (UID: "aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.332169 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.596715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" event={"ID":"aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8","Type":"ContainerDied","Data":"ff83cf48c044530f6fcb0670d07db1ed4ad882f53769550b78cbd4f2d261c5ea"} Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.596768 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff83cf48c044530f6fcb0670d07db1ed4ad882f53769550b78cbd4f2d261c5ea" Nov 22 07:35:31 crc kubenswrapper[4858]: I1122 07:35:31.596811 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287zfs2j" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.030728 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2"] Nov 22 07:35:35 crc kubenswrapper[4858]: E1122 07:35:35.031367 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="extract" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.031381 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="extract" Nov 22 07:35:35 crc kubenswrapper[4858]: E1122 07:35:35.031395 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="util" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.031402 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="util" Nov 22 07:35:35 crc kubenswrapper[4858]: E1122 07:35:35.031415 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="pull" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.031421 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="pull" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.031544 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf8a9a2-8fc5-4dfc-98b8-d201aa5114e8" containerName="extract" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.032156 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.034891 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-d9fr5" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.063519 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2"] Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.101189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgvk4\" (UniqueName: \"kubernetes.io/projected/eceb5a53-29b2-43c0-99d3-8bc45d7fec80-kube-api-access-bgvk4\") pod \"openstack-operator-controller-operator-8486c7f98b-7vnw2\" (UID: \"eceb5a53-29b2-43c0-99d3-8bc45d7fec80\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.202876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgvk4\" (UniqueName: \"kubernetes.io/projected/eceb5a53-29b2-43c0-99d3-8bc45d7fec80-kube-api-access-bgvk4\") pod \"openstack-operator-controller-operator-8486c7f98b-7vnw2\" (UID: \"eceb5a53-29b2-43c0-99d3-8bc45d7fec80\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.227752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgvk4\" (UniqueName: \"kubernetes.io/projected/eceb5a53-29b2-43c0-99d3-8bc45d7fec80-kube-api-access-bgvk4\") pod \"openstack-operator-controller-operator-8486c7f98b-7vnw2\" 
(UID: \"eceb5a53-29b2-43c0-99d3-8bc45d7fec80\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.350560 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.603174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2"] Nov 22 07:35:35 crc kubenswrapper[4858]: I1122 07:35:35.628357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" event={"ID":"eceb5a53-29b2-43c0-99d3-8bc45d7fec80","Type":"ContainerStarted","Data":"680ac16729fed218275700c2232074797d5db6dd65ad9e3785c0296cf3ced19b"} Nov 22 07:35:44 crc kubenswrapper[4858]: I1122 07:35:44.698385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" event={"ID":"eceb5a53-29b2-43c0-99d3-8bc45d7fec80","Type":"ContainerStarted","Data":"74dd6cd66759192ceef50e955fdab05abf814abdce462d410859a6ddf8c0fa84"} Nov 22 07:35:55 crc kubenswrapper[4858]: I1122 07:35:55.778654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" event={"ID":"eceb5a53-29b2-43c0-99d3-8bc45d7fec80","Type":"ContainerStarted","Data":"78d8b4f5977ba29357f8d4072c993f69f7ccaee51cb6d0b4072ac12b3eefa7b4"} Nov 22 07:35:55 crc kubenswrapper[4858]: I1122 07:35:55.779533 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:55 crc kubenswrapper[4858]: I1122 07:35:55.781703 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" Nov 22 07:35:55 crc kubenswrapper[4858]: I1122 07:35:55.814258 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-7vnw2" podStartSLOduration=1.306555302 podStartE2EDuration="20.81423519s" podCreationTimestamp="2025-11-22 07:35:35 +0000 UTC" firstStartedPulling="2025-11-22 07:35:35.620220738 +0000 UTC m=+1497.461643744" lastFinishedPulling="2025-11-22 07:35:55.127900626 +0000 UTC m=+1516.969323632" observedRunningTime="2025-11-22 07:35:55.810874643 +0000 UTC m=+1517.652297669" watchObservedRunningTime="2025-11-22 07:35:55.81423519 +0000 UTC m=+1517.655658216" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.036612 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.038489 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.048507 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.049785 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.051699 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zrlk2" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.058137 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-l6kmx" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.059384 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.068981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlj4d\" (UniqueName: \"kubernetes.io/projected/e60163a5-437a-4647-ba1e-ccb800dc2d30-kube-api-access-qlj4d\") pod \"barbican-operator-controller-manager-7768f8c84f-qplxm\" (UID: \"e60163a5-437a-4647-ba1e-ccb800dc2d30\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.079462 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.091878 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.093196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.097823 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-45ddd" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.115659 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.140439 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.141470 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.165690 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7hwcl" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.175165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlj4d\" (UniqueName: \"kubernetes.io/projected/e60163a5-437a-4647-ba1e-ccb800dc2d30-kube-api-access-qlj4d\") pod \"barbican-operator-controller-manager-7768f8c84f-qplxm\" (UID: \"e60163a5-437a-4647-ba1e-ccb800dc2d30\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.234528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.288469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcbfg\" (UniqueName: \"kubernetes.io/projected/23bfa545-d340-4a3f-afeb-8e292096cb33-kube-api-access-wcbfg\") pod \"designate-operator-controller-manager-56dfb6b67f-w289j\" (UID: \"23bfa545-d340-4a3f-afeb-8e292096cb33\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.288540 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9tg\" (UniqueName: \"kubernetes.io/projected/9b045d27-9e4d-4615-a7f0-e590d259bab4-kube-api-access-rs9tg\") pod \"glance-operator-controller-manager-8667fbf6f6-2tz7w\" (UID: \"9b045d27-9e4d-4615-a7f0-e590d259bab4\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.288606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56dt\" (UniqueName: \"kubernetes.io/projected/34c4063b-09b1-4591-a026-0bb061649b1a-kube-api-access-d56dt\") pod \"cinder-operator-controller-manager-6d8fd67bf7-7mpq7\" (UID: \"34c4063b-09b1-4591-a026-0bb061649b1a\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.302463 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.303900 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.310549 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-thqtn" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.311289 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.312616 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.314531 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-66sw6" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.343447 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-499nc"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.345057 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.351701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pwzwl" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.351910 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.361877 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.380915 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-499nc"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.392135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcbfg\" (UniqueName: \"kubernetes.io/projected/23bfa545-d340-4a3f-afeb-8e292096cb33-kube-api-access-wcbfg\") pod \"designate-operator-controller-manager-56dfb6b67f-w289j\" (UID: \"23bfa545-d340-4a3f-afeb-8e292096cb33\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.392211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs9tg\" (UniqueName: \"kubernetes.io/projected/9b045d27-9e4d-4615-a7f0-e590d259bab4-kube-api-access-rs9tg\") pod \"glance-operator-controller-manager-8667fbf6f6-2tz7w\" (UID: \"9b045d27-9e4d-4615-a7f0-e590d259bab4\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.392278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56dt\" (UniqueName: \"kubernetes.io/projected/34c4063b-09b1-4591-a026-0bb061649b1a-kube-api-access-d56dt\") pod \"cinder-operator-controller-manager-6d8fd67bf7-7mpq7\" (UID: \"34c4063b-09b1-4591-a026-0bb061649b1a\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.393056 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.424846 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.425968 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.438236 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2lzs7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.455847 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.457238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.464942 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kg7d9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.482405 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.494426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.494502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg88n\" (UniqueName: \"kubernetes.io/projected/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-kube-api-access-mg88n\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.494612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jrs\" (UniqueName: \"kubernetes.io/projected/032bc77f-0555-4036-9507-fa28e25f89fe-kube-api-access-f5jrs\") pod \"horizon-operator-controller-manager-5d86b44686-spzd8\" (UID: \"032bc77f-0555-4036-9507-fa28e25f89fe\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.494646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7fp\" (UniqueName: \"kubernetes.io/projected/920a745c-fd4f-4a5f-a242-6524b871fd64-kube-api-access-vl7fp\") pod \"heat-operator-controller-manager-bf4c6585d-7qg4d\" (UID: \"920a745c-fd4f-4a5f-a242-6524b871fd64\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.509406 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.510795 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.520696 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lhh5h" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.538431 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.562397 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.584547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlj4d\" (UniqueName: \"kubernetes.io/projected/e60163a5-437a-4647-ba1e-ccb800dc2d30-kube-api-access-qlj4d\") pod \"barbican-operator-controller-manager-7768f8c84f-qplxm\" (UID: \"e60163a5-437a-4647-ba1e-ccb800dc2d30\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.593013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcbfg\" (UniqueName: \"kubernetes.io/projected/23bfa545-d340-4a3f-afeb-8e292096cb33-kube-api-access-wcbfg\") pod \"designate-operator-controller-manager-56dfb6b67f-w289j\" (UID: \"23bfa545-d340-4a3f-afeb-8e292096cb33\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.593112 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.594467 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7fp\" (UniqueName: \"kubernetes.io/projected/920a745c-fd4f-4a5f-a242-6524b871fd64-kube-api-access-vl7fp\") pod \"heat-operator-controller-manager-bf4c6585d-7qg4d\" (UID: \"920a745c-fd4f-4a5f-a242-6524b871fd64\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg88n\" (UniqueName: \"kubernetes.io/projected/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-kube-api-access-mg88n\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc77v\" (UniqueName: \"kubernetes.io/projected/b3cf7fca-3d8c-4aae-8974-0ed60d98e105-kube-api-access-rc77v\") pod \"keystone-operator-controller-manager-7879fb76fd-gb8kr\" (UID: \"b3cf7fca-3d8c-4aae-8974-0ed60d98e105\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jrs\" (UniqueName: \"kubernetes.io/projected/032bc77f-0555-4036-9507-fa28e25f89fe-kube-api-access-f5jrs\") pod \"horizon-operator-controller-manager-5d86b44686-spzd8\" (UID: \"032bc77f-0555-4036-9507-fa28e25f89fe\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.596364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwkr\" (UniqueName: \"kubernetes.io/projected/32923d04-1afb-482f-b8f2-30dbe60e166f-kube-api-access-8bwkr\") pod \"ironic-operator-controller-manager-5c75d7c94b-pwftd\" (UID: \"32923d04-1afb-482f-b8f2-30dbe60e166f\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:36:14 crc kubenswrapper[4858]: E1122 07:36:14.596942 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:14 crc kubenswrapper[4858]: E1122 07:36:14.596990 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert podName:ba1a065a-3e2f-41fd-9eba-761128ddfcdf nodeName:}" failed. No retries permitted until 2025-11-22 07:36:15.096973563 +0000 UTC m=+1536.938396569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert") pod "infra-operator-controller-manager-769d9c7585-499nc" (UID: "ba1a065a-3e2f-41fd-9eba-761128ddfcdf") : secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.598094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs9tg\" (UniqueName: \"kubernetes.io/projected/9b045d27-9e4d-4615-a7f0-e590d259bab4-kube-api-access-rs9tg\") pod \"glance-operator-controller-manager-8667fbf6f6-2tz7w\" (UID: \"9b045d27-9e4d-4615-a7f0-e590d259bab4\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.598174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.620701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56dt\" (UniqueName: \"kubernetes.io/projected/34c4063b-09b1-4591-a026-0bb061649b1a-kube-api-access-d56dt\") pod \"cinder-operator-controller-manager-6d8fd67bf7-7mpq7\" (UID: \"34c4063b-09b1-4591-a026-0bb061649b1a\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.621742 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.623102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.634645 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pgfjf" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.635423 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.652435 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.653998 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.657801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg88n\" (UniqueName: \"kubernetes.io/projected/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-kube-api-access-mg88n\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.658118 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7fp\" (UniqueName: \"kubernetes.io/projected/920a745c-fd4f-4a5f-a242-6524b871fd64-kube-api-access-vl7fp\") pod \"heat-operator-controller-manager-bf4c6585d-7qg4d\" (UID: \"920a745c-fd4f-4a5f-a242-6524b871fd64\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.658715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.667549 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-cwjfr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.667836 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zcxlq" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.667952 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.678958 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.700245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzdx\" (UniqueName: \"kubernetes.io/projected/c59f7090-9bf4-44d0-b2f7-abc4084741d5-kube-api-access-qgzdx\") pod \"manila-operator-controller-manager-7bb88cb858-cndd9\" (UID: \"c59f7090-9bf4-44d0-b2f7-abc4084741d5\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.700296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dpj5\" (UniqueName: \"kubernetes.io/projected/44cc6988-9131-4dc3-9cab-c871699736e8-kube-api-access-8dpj5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-c5mcb\" (UID: \"44cc6988-9131-4dc3-9cab-c871699736e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.700350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bwkr\" (UniqueName: \"kubernetes.io/projected/32923d04-1afb-482f-b8f2-30dbe60e166f-kube-api-access-8bwkr\") pod \"ironic-operator-controller-manager-5c75d7c94b-pwftd\" (UID: \"32923d04-1afb-482f-b8f2-30dbe60e166f\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.700442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc77v\" (UniqueName: \"kubernetes.io/projected/b3cf7fca-3d8c-4aae-8974-0ed60d98e105-kube-api-access-rc77v\") pod \"keystone-operator-controller-manager-7879fb76fd-gb8kr\" (UID: \"b3cf7fca-3d8c-4aae-8974-0ed60d98e105\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.716497 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.728475 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.730163 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.733011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jrs\" (UniqueName: \"kubernetes.io/projected/032bc77f-0555-4036-9507-fa28e25f89fe-kube-api-access-f5jrs\") pod \"horizon-operator-controller-manager-5d86b44686-spzd8\" (UID: \"032bc77f-0555-4036-9507-fa28e25f89fe\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.753727 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fr7zp" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.756609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc77v\" (UniqueName: \"kubernetes.io/projected/b3cf7fca-3d8c-4aae-8974-0ed60d98e105-kube-api-access-rc77v\") pod \"keystone-operator-controller-manager-7879fb76fd-gb8kr\" (UID: \"b3cf7fca-3d8c-4aae-8974-0ed60d98e105\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.782969 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.786746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bwkr\" (UniqueName: \"kubernetes.io/projected/32923d04-1afb-482f-b8f2-30dbe60e166f-kube-api-access-8bwkr\") pod \"ironic-operator-controller-manager-5c75d7c94b-pwftd\" (UID: \"32923d04-1afb-482f-b8f2-30dbe60e166f\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.801711 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.804298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzdx\" (UniqueName: \"kubernetes.io/projected/c59f7090-9bf4-44d0-b2f7-abc4084741d5-kube-api-access-qgzdx\") pod \"manila-operator-controller-manager-7bb88cb858-cndd9\" (UID: \"c59f7090-9bf4-44d0-b2f7-abc4084741d5\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.804905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dpj5\" (UniqueName: \"kubernetes.io/projected/44cc6988-9131-4dc3-9cab-c871699736e8-kube-api-access-8dpj5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-c5mcb\" (UID: \"44cc6988-9131-4dc3-9cab-c871699736e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.805054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t84j5\" (UniqueName: \"kubernetes.io/projected/6843bd96-e987-4170-823d-3b929461a48f-kube-api-access-t84j5\") pod \"neutron-operator-controller-manager-66b7d6f598-6m467\" (UID: \"6843bd96-e987-4170-823d-3b929461a48f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.805129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6ct\" (UniqueName: \"kubernetes.io/projected/ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5-kube-api-access-wm6ct\") pod \"nova-operator-controller-manager-86d796d84d-8njw2\" (UID: \"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.840217 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.840919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dpj5\" (UniqueName: \"kubernetes.io/projected/44cc6988-9131-4dc3-9cab-c871699736e8-kube-api-access-8dpj5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-c5mcb\" (UID: \"44cc6988-9131-4dc3-9cab-c871699736e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.841587 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.848423 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-45g9p" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.848808 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.858660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzdx\" (UniqueName: \"kubernetes.io/projected/c59f7090-9bf4-44d0-b2f7-abc4084741d5-kube-api-access-qgzdx\") pod \"manila-operator-controller-manager-7bb88cb858-cndd9\" (UID: \"c59f7090-9bf4-44d0-b2f7-abc4084741d5\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.871090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.872547 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.872945 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.873285 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.893672 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.893790 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg"] Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.908976 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8bhp\" (UniqueName: \"kubernetes.io/projected/450117a8-48ee-4b20-8588-a38daf6ff303-kube-api-access-f8bhp\") pod \"octavia-operator-controller-manager-6fdc856c5d-l65cp\" (UID: \"450117a8-48ee-4b20-8588-a38daf6ff303\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.909068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t84j5\" (UniqueName: \"kubernetes.io/projected/6843bd96-e987-4170-823d-3b929461a48f-kube-api-access-t84j5\") pod \"neutron-operator-controller-manager-66b7d6f598-6m467\" (UID: \"6843bd96-e987-4170-823d-3b929461a48f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.909120 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6ct\" (UniqueName: \"kubernetes.io/projected/ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5-kube-api-access-wm6ct\") pod \"nova-operator-controller-manager-86d796d84d-8njw2\" (UID: \"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.944968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t84j5\" (UniqueName: \"kubernetes.io/projected/6843bd96-e987-4170-823d-3b929461a48f-kube-api-access-t84j5\") pod \"neutron-operator-controller-manager-66b7d6f598-6m467\" (UID: \"6843bd96-e987-4170-823d-3b929461a48f\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:36:14 crc kubenswrapper[4858]: I1122 07:36:14.953360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6ct\" (UniqueName: \"kubernetes.io/projected/ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5-kube-api-access-wm6ct\") pod \"nova-operator-controller-manager-86d796d84d-8njw2\" (UID: \"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.013901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.013957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jcm\" (UniqueName: \"kubernetes.io/projected/9fa2db3b-611f-4907-b329-57a5610f6c50-kube-api-access-28jcm\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.014013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8bhp\" (UniqueName: \"kubernetes.io/projected/450117a8-48ee-4b20-8588-a38daf6ff303-kube-api-access-f8bhp\") pod \"octavia-operator-controller-manager-6fdc856c5d-l65cp\" (UID: \"450117a8-48ee-4b20-8588-a38daf6ff303\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.032881 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.034183 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.057858 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nvpg5" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.068892 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.078969 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.092553 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zfgdv" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.103494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.115676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.118136 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.119977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ssk\" (UniqueName: \"kubernetes.io/projected/db81ffbc-f748-4ece-ac50-189d6811825e-kube-api-access-t5ssk\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-q2z78\" (UID: \"db81ffbc-f748-4ece-ac50-189d6811825e\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.120027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.120091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.120131 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28jcm\" (UniqueName: \"kubernetes.io/projected/9fa2db3b-611f-4907-b329-57a5610f6c50-kube-api-access-28jcm\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.123111 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.124487 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.135734 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert podName:9fa2db3b-611f-4907-b329-57a5610f6c50 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:15.635683651 +0000 UTC m=+1537.477106657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" (UID: "9fa2db3b-611f-4907-b329-57a5610f6c50") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.124604 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.136649 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert podName:ba1a065a-3e2f-41fd-9eba-761128ddfcdf nodeName:}" failed. 
No retries permitted until 2025-11-22 07:36:16.13662075 +0000 UTC m=+1537.978043756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert") pod "infra-operator-controller-manager-769d9c7585-499nc" (UID: "ba1a065a-3e2f-41fd-9eba-761128ddfcdf") : secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.158383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8bhp\" (UniqueName: \"kubernetes.io/projected/450117a8-48ee-4b20-8588-a38daf6ff303-kube-api-access-f8bhp\") pod \"octavia-operator-controller-manager-6fdc856c5d-l65cp\" (UID: \"450117a8-48ee-4b20-8588-a38daf6ff303\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.158697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.183662 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.185336 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.189648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.193346 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tglc2" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.195902 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.196201 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.197198 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.203722 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.204819 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9hcc7" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.205400 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.207996 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6g99x" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.214098 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.216736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28jcm\" (UniqueName: \"kubernetes.io/projected/9fa2db3b-611f-4907-b329-57a5610f6c50-kube-api-access-28jcm\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.223041 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.227287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9tkw\" (UniqueName: \"kubernetes.io/projected/13fd8728-3920-4cc8-a6fd-af0593936c76-kube-api-access-n9tkw\") pod \"placement-operator-controller-manager-6dc664666c-m2b42\" (UID: \"13fd8728-3920-4cc8-a6fd-af0593936c76\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.227382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ssk\" (UniqueName: \"kubernetes.io/projected/db81ffbc-f748-4ece-ac50-189d6811825e-kube-api-access-t5ssk\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-q2z78\" (UID: \"db81ffbc-f748-4ece-ac50-189d6811825e\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.233811 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.235116 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.235725 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.251766 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-kt64d" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.255498 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.256770 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.279987 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.280257 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gp57x" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.290415 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.312158 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ssk\" (UniqueName: \"kubernetes.io/projected/db81ffbc-f748-4ece-ac50-189d6811825e-kube-api-access-t5ssk\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-q2z78\" (UID: \"db81ffbc-f748-4ece-ac50-189d6811825e\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wfd\" (UniqueName: \"kubernetes.io/projected/0e34d0d6-23cf-4ebc-ac61-72290f0dbb49-kube-api-access-b9wfd\") pod \"test-operator-controller-manager-8464cf66df-tg7dz\" (UID: \"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9tkw\" (UniqueName: \"kubernetes.io/projected/13fd8728-3920-4cc8-a6fd-af0593936c76-kube-api-access-n9tkw\") pod \"placement-operator-controller-manager-6dc664666c-m2b42\" (UID: \"13fd8728-3920-4cc8-a6fd-af0593936c76\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmg52\" (UniqueName: \"kubernetes.io/projected/13bfdea8-1844-4e91-8d2d-d197ab545051-kube-api-access-pmg52\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrm2f\" (UniqueName: \"kubernetes.io/projected/6588ae44-3c72-4f10-ab0f-fea80227cfc8-kube-api-access-mrm2f\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sc76l\" (UID: \"6588ae44-3c72-4f10-ab0f-fea80227cfc8\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329439 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzk2l\" (UniqueName: \"kubernetes.io/projected/31fb5cb2-a858-4c60-adb0-d876c661b634-kube-api-access-zzk2l\") pod \"swift-operator-controller-manager-799cb6ffd6-w6s65\" (UID: \"31fb5cb2-a858-4c60-adb0-d876c661b634\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.329458 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrpj\" (UniqueName: \"kubernetes.io/projected/a9c2e231-a0b4-4125-9a52-53c0012e33df-kube-api-access-gdrpj\") pod \"telemetry-operator-controller-manager-7798859c74-kpjqt\" (UID: \"a9c2e231-a0b4-4125-9a52-53c0012e33df\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.344208 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.351564 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.357406 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-g86l2" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.365737 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.371250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9tkw\" (UniqueName: \"kubernetes.io/projected/13fd8728-3920-4cc8-a6fd-af0593936c76-kube-api-access-n9tkw\") pod \"placement-operator-controller-manager-6dc664666c-m2b42\" (UID: \"13fd8728-3920-4cc8-a6fd-af0593936c76\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.371694 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w"] Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.381019 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.431073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wfd\" (UniqueName: \"kubernetes.io/projected/0e34d0d6-23cf-4ebc-ac61-72290f0dbb49-kube-api-access-b9wfd\") pod \"test-operator-controller-manager-8464cf66df-tg7dz\" (UID: \"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmg52\" (UniqueName: \"kubernetes.io/projected/13bfdea8-1844-4e91-8d2d-d197ab545051-kube-api-access-pmg52\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432797 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrm2f\" (UniqueName: \"kubernetes.io/projected/6588ae44-3c72-4f10-ab0f-fea80227cfc8-kube-api-access-mrm2f\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sc76l\" (UID: \"6588ae44-3c72-4f10-ab0f-fea80227cfc8\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzk2l\" (UniqueName: \"kubernetes.io/projected/31fb5cb2-a858-4c60-adb0-d876c661b634-kube-api-access-zzk2l\") pod \"swift-operator-controller-manager-799cb6ffd6-w6s65\" (UID: \"31fb5cb2-a858-4c60-adb0-d876c661b634\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrpj\" (UniqueName: \"kubernetes.io/projected/a9c2e231-a0b4-4125-9a52-53c0012e33df-kube-api-access-gdrpj\") pod \"telemetry-operator-controller-manager-7798859c74-kpjqt\" (UID: \"a9c2e231-a0b4-4125-9a52-53c0012e33df\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.432940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbng2\" (UniqueName: \"kubernetes.io/projected/03950dc4-3e8b-4723-a446-da38e0918da9-kube-api-access-rbng2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w\" (UID: \"03950dc4-3e8b-4723-a446-da38e0918da9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.431423 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 
07:36:15.435824 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert podName:13bfdea8-1844-4e91-8d2d-d197ab545051 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:15.935795391 +0000 UTC m=+1537.777218457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-rrm27" (UID: "13bfdea8-1844-4e91-8d2d-d197ab545051") : secret "webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.507215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdrpj\" (UniqueName: \"kubernetes.io/projected/a9c2e231-a0b4-4125-9a52-53c0012e33df-kube-api-access-gdrpj\") pod \"telemetry-operator-controller-manager-7798859c74-kpjqt\" (UID: \"a9c2e231-a0b4-4125-9a52-53c0012e33df\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.512702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrm2f\" (UniqueName: \"kubernetes.io/projected/6588ae44-3c72-4f10-ab0f-fea80227cfc8-kube-api-access-mrm2f\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sc76l\" (UID: \"6588ae44-3c72-4f10-ab0f-fea80227cfc8\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.526884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzk2l\" (UniqueName: \"kubernetes.io/projected/31fb5cb2-a858-4c60-adb0-d876c661b634-kube-api-access-zzk2l\") pod \"swift-operator-controller-manager-799cb6ffd6-w6s65\" (UID: \"31fb5cb2-a858-4c60-adb0-d876c661b634\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.527818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wfd\" (UniqueName: \"kubernetes.io/projected/0e34d0d6-23cf-4ebc-ac61-72290f0dbb49-kube-api-access-b9wfd\") pod \"test-operator-controller-manager-8464cf66df-tg7dz\" (UID: \"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.534206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbng2\" (UniqueName: \"kubernetes.io/projected/03950dc4-3e8b-4723-a446-da38e0918da9-kube-api-access-rbng2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w\" (UID: \"03950dc4-3e8b-4723-a446-da38e0918da9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.536057 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.543623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmg52\" (UniqueName: \"kubernetes.io/projected/13bfdea8-1844-4e91-8d2d-d197ab545051-kube-api-access-pmg52\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.599210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbng2\" (UniqueName: \"kubernetes.io/projected/03950dc4-3e8b-4723-a446-da38e0918da9-kube-api-access-rbng2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w\" (UID: \"03950dc4-3e8b-4723-a446-da38e0918da9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.636621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.636765 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: E1122 07:36:15.636816 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert podName:9fa2db3b-611f-4907-b329-57a5610f6c50 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:16.63680251 +0000 UTC m=+1538.478225516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" (UID: "9fa2db3b-611f-4907-b329-57a5610f6c50") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.734983 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.767301 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.972828 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:36:15 crc kubenswrapper[4858]: I1122 07:36:15.980258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm"] Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.014009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.014276 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.014351 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert podName:13bfdea8-1844-4e91-8d2d-d197ab545051 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:17.014335598 +0000 UTC m=+1538.855758604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-rrm27" (UID: "13bfdea8-1844-4e91-8d2d-d197ab545051") : secret "webhook-server-cert" not found Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.057930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" event={"ID":"e60163a5-437a-4647-ba1e-ccb800dc2d30","Type":"ContainerStarted","Data":"a4fc33526a13707f4b87390608f56c31293b1df4d850e6c6388a735bf21de923"} Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.145000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.218449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.218923 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.218993 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert podName:ba1a065a-3e2f-41fd-9eba-761128ddfcdf nodeName:}" failed. No retries permitted until 2025-11-22 07:36:18.218970122 +0000 UTC m=+1540.060393128 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert") pod "infra-operator-controller-manager-769d9c7585-499nc" (UID: "ba1a065a-3e2f-41fd-9eba-761128ddfcdf") : secret "infra-operator-webhook-server-cert" not found Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.308783 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.349050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j"] Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.356668 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7"] Nov 22 07:36:16 crc kubenswrapper[4858]: W1122 07:36:16.522888 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23bfa545_d340_4a3f_afeb_8e292096cb33.slice/crio-09796e731ed94aee7d8ddc8659e630404bd5eb78d9eb494c30f2eb3bef0c930a WatchSource:0}: Error finding container 09796e731ed94aee7d8ddc8659e630404bd5eb78d9eb494c30f2eb3bef0c930a: Status 404 returned error can't find the container with id 09796e731ed94aee7d8ddc8659e630404bd5eb78d9eb494c30f2eb3bef0c930a Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.553938 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w"] Nov 22 07:36:16 crc kubenswrapper[4858]: I1122 07:36:16.739431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.739629 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:16 crc kubenswrapper[4858]: E1122 07:36:16.739701 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert podName:9fa2db3b-611f-4907-b329-57a5610f6c50 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:18.739681846 +0000 UTC m=+1540.581104852 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" (UID: "9fa2db3b-611f-4907-b329-57a5610f6c50") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.044820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.057666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13bfdea8-1844-4e91-8d2d-d197ab545051-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rrm27\" (UID: \"13bfdea8-1844-4e91-8d2d-d197ab545051\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.068370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" event={"ID":"9b045d27-9e4d-4615-a7f0-e590d259bab4","Type":"ContainerStarted","Data":"496359b166b737abe23c3b99aaad092a3cdc9668f6bfbf4b1ef8705877aed25d"} Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.073121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" event={"ID":"34c4063b-09b1-4591-a026-0bb061649b1a","Type":"ContainerStarted","Data":"c5579e4b4007230dafdfd5fd1c5edcdbafbbb03cab4b85be76a9235788309a9e"} Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.084487 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" event={"ID":"23bfa545-d340-4a3f-afeb-8e292096cb33","Type":"ContainerStarted","Data":"09796e731ed94aee7d8ddc8659e630404bd5eb78d9eb494c30f2eb3bef0c930a"} Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.112092 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.347593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.464368 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.837157 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.868402 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.886619 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8"] Nov 22 07:36:17 crc kubenswrapper[4858]: W1122 07:36:17.904567 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb81ffbc_f748_4ece_ac50_189d6811825e.slice/crio-02fe0f6f8a5f032a9119efe2fa995c026bc6e6d1cb854f6e6ad720681c138db5 WatchSource:0}: Error finding container 02fe0f6f8a5f032a9119efe2fa995c026bc6e6d1cb854f6e6ad720681c138db5: Status 404 returned error can't find the container with id 02fe0f6f8a5f032a9119efe2fa995c026bc6e6d1cb854f6e6ad720681c138db5 Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.905417 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.910372 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd"] Nov 22 07:36:17 crc kubenswrapper[4858]: W1122 07:36:17.935553 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod032bc77f_0555_4036_9507_fa28e25f89fe.slice/crio-d17e5d790f47b652fd295d11195aa6519c9f2544c1a162ad84a7aaeeae9c9e76 WatchSource:0}: Error finding container d17e5d790f47b652fd295d11195aa6519c9f2544c1a162ad84a7aaeeae9c9e76: Status 404 returned error can't find the container with id d17e5d790f47b652fd295d11195aa6519c9f2544c1a162ad84a7aaeeae9c9e76 Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.943310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.956525 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467"] Nov 22 07:36:17 crc kubenswrapper[4858]: I1122 07:36:17.984532 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp"] Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:17.999348 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65"] Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.026065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w"] Nov 22 
07:36:18 crc kubenswrapper[4858]: W1122 07:36:18.038284 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6588ae44_3c72_4f10_ab0f_fea80227cfc8.slice/crio-1177b069015547ccf4eeb40906b1b03e064f1d8011de1f11f387f5ffad562040 WatchSource:0}: Error finding container 1177b069015547ccf4eeb40906b1b03e064f1d8011de1f11f387f5ffad562040: Status 404 returned error can't find the container with id 1177b069015547ccf4eeb40906b1b03e064f1d8011de1f11f387f5ffad562040 Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.039817 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l"] Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.045599 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb"] Nov 22 07:36:18 crc kubenswrapper[4858]: E1122 07:36:18.060455 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrm2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-sc76l_openstack-operators(6588ae44-3c72-4f10-ab0f-fea80227cfc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:36:18 crc kubenswrapper[4858]: E1122 07:36:18.098837 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dpj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-c5mcb_openstack-operators(44cc6988-9131-4dc3-9cab-c871699736e8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.103086 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" event={"ID":"32923d04-1afb-482f-b8f2-30dbe60e166f","Type":"ContainerStarted","Data":"1972ae39562481a14146e7a91a1d8351314aa0e89cd17a322c650c8d5e149abb"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.105508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" event={"ID":"6843bd96-e987-4170-823d-3b929461a48f","Type":"ContainerStarted","Data":"1eaed3e8db6e4ebc4fc68501b4db8f93438255bfeab70914d4f513d54629c2b5"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.108201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" event={"ID":"6588ae44-3c72-4f10-ab0f-fea80227cfc8","Type":"ContainerStarted","Data":"1177b069015547ccf4eeb40906b1b03e064f1d8011de1f11f387f5ffad562040"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.110128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" event={"ID":"b3cf7fca-3d8c-4aae-8974-0ed60d98e105","Type":"ContainerStarted","Data":"145222d6faabd182acf4f90ff6f50eb3eff9a17213fcde2ccb59f8181535a19c"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.114097 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" event={"ID":"a9c2e231-a0b4-4125-9a52-53c0012e33df","Type":"ContainerStarted","Data":"f4aa9d2ad315abf9fd03e4d98be0b3f266c51a65ff7a7be29938bfbc02b54e28"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.117256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" event={"ID":"920a745c-fd4f-4a5f-a242-6524b871fd64","Type":"ContainerStarted","Data":"d27bd98c8c0dc8c11db531e87f34e5d7f57b707ca175e1fe3c3fec7da25c327f"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.119798 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" event={"ID":"44cc6988-9131-4dc3-9cab-c871699736e8","Type":"ContainerStarted","Data":"f78df3d16f810bfea839b0a9870d29003ebd3975ed88faf1c7c12a50661d6dcb"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.123294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" event={"ID":"03950dc4-3e8b-4723-a446-da38e0918da9","Type":"ContainerStarted","Data":"944c5c5bae86a8a6fb661a338732ff5e6143cc9c509f4157596dbbf82c14c315"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.126435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" event={"ID":"032bc77f-0555-4036-9507-fa28e25f89fe","Type":"ContainerStarted","Data":"d17e5d790f47b652fd295d11195aa6519c9f2544c1a162ad84a7aaeeae9c9e76"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.128667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" event={"ID":"c59f7090-9bf4-44d0-b2f7-abc4084741d5","Type":"ContainerStarted","Data":"b9e1d9bec0dae50469249ab1ab29958d20c5de256c36128753736434dfa9578b"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.133866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" event={"ID":"db81ffbc-f748-4ece-ac50-189d6811825e","Type":"ContainerStarted","Data":"02fe0f6f8a5f032a9119efe2fa995c026bc6e6d1cb854f6e6ad720681c138db5"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.135149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" event={"ID":"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5","Type":"ContainerStarted","Data":"da71ba4985b8e87a9954a48640eb054343d20551debb6d9b045024ecdd63d3c1"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.136730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" event={"ID":"31fb5cb2-a858-4c60-adb0-d876c661b634","Type":"ContainerStarted","Data":"03d4fef17c099794a815ccf9190afe7915afeded8939c02b75c7f78418767faf"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.138454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" event={"ID":"450117a8-48ee-4b20-8588-a38daf6ff303","Type":"ContainerStarted","Data":"10f6c524d509e4ad26bef2febee38a4f70514fc55f71374a349639c0f8fd31c7"} Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.224675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42"] Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.269467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.284942 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba1a065a-3e2f-41fd-9eba-761128ddfcdf-cert\") pod \"infra-operator-controller-manager-769d9c7585-499nc\" (UID: \"ba1a065a-3e2f-41fd-9eba-761128ddfcdf\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.315335 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27"] Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.340714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz"] Nov 22 07:36:18 crc kubenswrapper[4858]: E1122 07:36:18.410052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podUID="6588ae44-3c72-4f10-ab0f-fea80227cfc8" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.478992 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:36:18 crc kubenswrapper[4858]: E1122 07:36:18.489649 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podUID="44cc6988-9131-4dc3-9cab-c871699736e8" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.780136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.788678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fa2db3b-611f-4907-b329-57a5610f6c50-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg\" (UID: \"9fa2db3b-611f-4907-b329-57a5610f6c50\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:18 crc kubenswrapper[4858]: I1122 07:36:18.868338 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.139374 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-499nc"] Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.170204 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" event={"ID":"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49","Type":"ContainerStarted","Data":"585a0b4415f9d5588ae60a075a7006a8a9fe9aad9b39b352fc9f186ba577505a"} Nov 22 07:36:19 crc kubenswrapper[4858]: W1122 07:36:19.171327 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba1a065a_3e2f_41fd_9eba_761128ddfcdf.slice/crio-9fe8ed1cb61c7baa6f644005c6a364e9da9d6437977ab5ec68dfa7320b50496e WatchSource:0}: Error finding container 9fe8ed1cb61c7baa6f644005c6a364e9da9d6437977ab5ec68dfa7320b50496e: Status 404 returned error can't find the container with id 9fe8ed1cb61c7baa6f644005c6a364e9da9d6437977ab5ec68dfa7320b50496e Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.180377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" event={"ID":"13fd8728-3920-4cc8-a6fd-af0593936c76","Type":"ContainerStarted","Data":"9681ec15a0dca6afa5ea6a8ad7d79788588089a500453bcf94c24debdc1f1f57"} Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.232054 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" event={"ID":"44cc6988-9131-4dc3-9cab-c871699736e8","Type":"ContainerStarted","Data":"2fb9e14034cc20c456d07a5990c38e5228916fafc486515e2e8d6f5873119525"} Nov 22 07:36:19 crc kubenswrapper[4858]: E1122 07:36:19.241504 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podUID="44cc6988-9131-4dc3-9cab-c871699736e8" Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.255056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" event={"ID":"13bfdea8-1844-4e91-8d2d-d197ab545051","Type":"ContainerStarted","Data":"fffd4ebf4cfabceb6b05a19fd1621da38bc64dd5226f3e43cf965e436ef4af87"} Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.255108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" event={"ID":"13bfdea8-1844-4e91-8d2d-d197ab545051","Type":"ContainerStarted","Data":"6caabe63b61a44f5249a49660c21f60baef41527e0db4e876a6e820042a085ff"} Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.255120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" event={"ID":"13bfdea8-1844-4e91-8d2d-d197ab545051","Type":"ContainerStarted","Data":"85f9f85fa8daebb65de71e5297fd59a2f21cd062ee23471de8c39129008c60fd"} Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.256456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.281432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" event={"ID":"6588ae44-3c72-4f10-ab0f-fea80227cfc8","Type":"ContainerStarted","Data":"27a2279771eb860c0162b1b3b05fc3cde5614b9a4f25357b978ba0396ec66247"} Nov 22 07:36:19 crc kubenswrapper[4858]: E1122 07:36:19.286468 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podUID="6588ae44-3c72-4f10-ab0f-fea80227cfc8" Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.318228 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" podStartSLOduration=4.318201195 podStartE2EDuration="4.318201195s" podCreationTimestamp="2025-11-22 07:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:36:19.314276369 +0000 UTC m=+1541.155699375" watchObservedRunningTime="2025-11-22 07:36:19.318201195 +0000 UTC m=+1541.159624201" Nov 22 07:36:19 crc kubenswrapper[4858]: I1122 07:36:19.623187 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg"] Nov 22 07:36:20 crc kubenswrapper[4858]: I1122 07:36:20.332951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" event={"ID":"9fa2db3b-611f-4907-b329-57a5610f6c50","Type":"ContainerStarted","Data":"51c81b8cdbf02af232cb90f370a6e2e4165e9359ad47c4885b058ae049e554c7"} Nov 22 07:36:20 crc kubenswrapper[4858]: I1122 07:36:20.337976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" event={"ID":"ba1a065a-3e2f-41fd-9eba-761128ddfcdf","Type":"ContainerStarted","Data":"9fe8ed1cb61c7baa6f644005c6a364e9da9d6437977ab5ec68dfa7320b50496e"} Nov 22 07:36:20 crc kubenswrapper[4858]: E1122 07:36:20.339587 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podUID="44cc6988-9131-4dc3-9cab-c871699736e8" Nov 22 07:36:20 crc kubenswrapper[4858]: E1122 07:36:20.343735 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podUID="6588ae44-3c72-4f10-ab0f-fea80227cfc8" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.077467 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 
07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.079402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.088338 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.218423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.218896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfmq\" (UniqueName: \"kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.218963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.320268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.320358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfmq\" (UniqueName: \"kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.320413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.320929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.320977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " 
pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.350438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfmq\" (UniqueName: \"kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq\") pod \"certified-operators-sq482\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:22 crc kubenswrapper[4858]: I1122 07:36:22.426330 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:36:23 crc kubenswrapper[4858]: I1122 07:36:23.752002 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 07:36:27 crc kubenswrapper[4858]: I1122 07:36:27.121866 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rrm27" Nov 22 07:36:28 crc kubenswrapper[4858]: I1122 07:36:28.500175 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerStarted","Data":"e2303519159ff59cada2ca66c503fad7bbe49d5874665964258d37d3f608764b"} Nov 22 07:36:34 crc kubenswrapper[4858]: E1122 07:36:34.978413 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 22 07:36:34 crc kubenswrapper[4858]: E1122 07:36:34.979086 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rc77v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7879fb76fd-gb8kr_openstack-operators(b3cf7fca-3d8c-4aae-8974-0ed60d98e105): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:36 crc kubenswrapper[4858]: E1122 07:36:36.765240 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 22 07:36:36 crc kubenswrapper[4858]: E1122 07:36:36.765818 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5ssk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-q2z78_openstack-operators(db81ffbc-f748-4ece-ac50-189d6811825e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:37 crc kubenswrapper[4858]: E1122 07:36:37.659345 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 22 07:36:37 crc kubenswrapper[4858]: E1122 07:36:37.659989 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5jrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5d86b44686-spzd8_openstack-operators(032bc77f-0555-4036-9507-fa28e25f89fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:38 crc kubenswrapper[4858]: E1122 07:36:38.648861 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 22 07:36:38 crc kubenswrapper[4858]: E1122 07:36:38.649362 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcbfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-56dfb6b67f-w289j_openstack-operators(23bfa545-d340-4a3f-afeb-8e292096cb33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.189115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.190816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.202852 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.224355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.224823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.224862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j689\" (UniqueName: \"kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.327480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.327604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" 
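[Editor's note, not part of the captured journal: the entries above repeatedly show the kubelet reporting "ImagePullBackOff" (pod_workers.go) and "ErrImagePull ... context canceled" (kuberuntime_manager.go) for the openstack-operators controller-manager pods. A minimal sketch follows for tallying which pods are affected when this capture is saved to a plain-text file; the file name kubelet.log, the regexes, and the script itself are illustrative assumptions, not tooling referenced by the log.]

#!/usr/bin/env python3
# Sketch only: count image-pull failures per pod in a kubelet journal capture.
# Assumes the log lines look like the entries above; "kubelet.log" is a
# hypothetical file name used for illustration.
import re
import sys
from collections import Counter

# Matches the pod_workers "ImagePullBackOff ... pod=..." entries.
BACKOFF_RE = re.compile(r'ImagePullBackOff: .*?pod="(?P<pod>[^"]+)"')
# Matches the kuberuntime_manager "start failed in pod <name>_<ns>(<uid>)" entries.
ERRPULL_RE = re.compile(r'start failed in pod (?P<pod>\S+?)\(')

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = BACKOFF_RE.search(line) or ERRPULL_RE.search(line)
        if m:
            counts[m.group("pod")] += 1
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    with open(path, encoding="utf-8", errors="replace") as fh:
        for pod, n in summarize(fh).most_common():
            print(f"{n:4d}  {pod}")

[End of editor's note; the journal capture continues below.]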
Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.327626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j689\" (UniqueName: \"kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.328134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.328429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.360382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j689\" (UniqueName: \"kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689\") pod \"redhat-operators-b28vs\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: I1122 07:36:39.526150 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:36:39 crc kubenswrapper[4858]: E1122 07:36:39.730525 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9" Nov 22 07:36:39 crc kubenswrapper[4858]: E1122 07:36:39.730770 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d56dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6d8fd67bf7-7mpq7_openstack-operators(34c4063b-09b1-4591-a026-0bb061649b1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.863665 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwksg"] Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.866433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.880481 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwksg"] Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.915818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc7v\" (UniqueName: \"kubernetes.io/projected/c136ac8d-cfc2-4ad9-ae4c-adef04349862-kube-api-access-6jc7v\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.915937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-catalog-content\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:44 crc kubenswrapper[4858]: I1122 07:36:44.915973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-utilities\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.019419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-catalog-content\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc 
kubenswrapper[4858]: I1122 07:36:45.019480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-utilities\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.019621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jc7v\" (UniqueName: \"kubernetes.io/projected/c136ac8d-cfc2-4ad9-ae4c-adef04349862-kube-api-access-6jc7v\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.020473 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-catalog-content\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.020680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136ac8d-cfc2-4ad9-ae4c-adef04349862-utilities\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.043434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jc7v\" (UniqueName: \"kubernetes.io/projected/c136ac8d-cfc2-4ad9-ae4c-adef04349862-kube-api-access-6jc7v\") pod \"community-operators-dwksg\" (UID: \"c136ac8d-cfc2-4ad9-ae4c-adef04349862\") " pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:45 crc kubenswrapper[4858]: I1122 07:36:45.194177 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:36:56 crc kubenswrapper[4858]: E1122 07:36:56.041354 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 22 07:36:56 crc kubenswrapper[4858]: E1122 07:36:56.042007 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mg88n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-769d9c7585-499nc_openstack-operators(ba1a065a-3e2f-41fd-9eba-761128ddfcdf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:56 crc kubenswrapper[4858]: E1122 07:36:56.567804 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 22 07:36:56 crc kubenswrapper[4858]: E1122 07:36:56.568357 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vl7fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-bf4c6585d-7qg4d_openstack-operators(920a745c-fd4f-4a5f-a242-6524b871fd64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:57 crc kubenswrapper[4858]: E1122 07:36:57.820732 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 22 07:36:57 crc kubenswrapper[4858]: E1122 07:36:57.820950 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t84j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-66b7d6f598-6m467_openstack-operators(6843bd96-e987-4170-823d-3b929461a48f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:58 crc kubenswrapper[4858]: E1122 07:36:58.261343 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 22 07:36:58 crc kubenswrapper[4858]: E1122 07:36:58.261579 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wm6ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-86d796d84d-8njw2_openstack-operators(ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:59 crc kubenswrapper[4858]: E1122 07:36:59.002710 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 22 07:36:59 crc kubenswrapper[4858]: E1122 07:36:59.004131 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:7dbadf7b98f2f305f9f1382f55a084c8ca404f4263f76b28e56bd0dc437e2192,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:0473ff9eec0da231e2d0a10bf1abbe1dfa1a0f95b8f619e3a07605386951449a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:c8101c77a82eae4407e41e1fd766dfc6e1b7f9ed1679e3efb6f91ff97a1557b2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:eb9743b21bbadca6f7cb9ac4fc46b5d58c51c674073c7e1121f4474a71304071,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:3d81f839b98c2e2a5bf0da79f2f9a92dff7d0a3c5a830b0e95c89dad8cf98a6a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:d19ac99249b47dd8ea16cd6aaa5756346aa8a2f119ee50819c15c5366efb417d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barb
ican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:4f1fa337760e82bfd67cdd142a97c121146dd7e621daac161940dd5e4ddb80dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:3613b345d5baed98effd906f8b0242d863e14c97078ea473ef01fe1b0afc46f3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b73ad22b4955b06d584bce81742556d8c0c7828c495494f8ea7c99391c61b70f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:aa1d3aaf6b394621ed4089a98e0a82b763f467e8b5c5db772f9fdf99fc86e333,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:d6661053141b6df421288a7c9968a155ab82e478c1d75ab41f2cebe2f0ca02d2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:ce2d63258cb4e7d0d1c07234de6889c5434464190906798019311a1c7cf6387f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:0485ef9e5b4437f7cd2ba54034a87722ce4669ee86b3773c6b0c037ed8000e91,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:962c004551d0503779364b767b9bf0cecdf78dbba8809b2ca8b073f58e1f4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:0ebf4c465fb6cc7dad9e6cb2da0ff54874c9acbcb40d62234a629ec2c12cdd62,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:ff0c553ceeb2e0f44b010e37dc6d0db8a251797b88e56468b7cf7f05253e4232,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:624f553f073af7493d34828b074adc9981cce403edd8e71482c7307008479fd9,ValueFrom:ni
l,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:e3874936a518c8560339db8f840fc5461885819f6050b5de8d3ab9199bea5094,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:1cea25f1d2a45affc80c46fb9d427749d3f06b61590ac6070a2910e3ec8a4e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:e36d5b9a65194f12f7b01c6422ba3ed52a687fd1695fbb21f4986c67d9f9317f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:8b21bec527d54cd766e277889df6bcccd2baeaa946274606b986c0c3b7ca689f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:45aceca77f8fcf61127f0da650bdfdf11ede9b0944c78b63fab819d03283f96b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:709ac58998927dd61786821ae1e63343fd97ccf5763aac5edb4583eea9401d22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:867d4ef7c21f75e6030a685b5762ab4d84b671316ed6b98d75200076e93342cd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:2b90da93550b99d2fcfa95bd819f3363aa68346a416f8dc7baac3e9c5f487761,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:8cde52cef8795d1c91983b100d86541c7718160ec260fe0f97b96add4c2c8ee8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:835ebed082fe1c45bd799d1d5357595ce63efeb05ca876f26b08443facb9c164,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:011d682241db724bc40736c9b54d2ea450ea7e6be095b1ff5fa28c8007466775,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:2025da90cff8f563deb08bee71efe16d4078edc2a767b2e225cca5c77f1aa2f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd
84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:ff46cd5e0e13d105c4629e78c2734a50835f06b6a1e31da9e0462981d10c4be3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:5b4fd0c2b76fa5539f74687b11c5882d77bd31352452322b37ff51fa18f12a61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:5e03376bd895346dc8f627ca15ded942526ed8b5e92872f453ce272e694d18d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:5f6045841aff0fde6f684a34cdf49f8dc7b2c3bcbdeab201f1058971e0c5f79e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:448f4e1b740c30936e340bd6e8534d78c83357bf373a4223950aa64d3484f007,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:b68e3615af8a0eb0ef6bf9ceeef59540a6f4a9a85f6078a3620be115c73a7db8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:7eae01cf60383e523c9cd94d158a9162120a7370829a1dad20fdea6b0fd660bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:28cc10501788081eb61b5a1af35546191a92741f4f109df54c74e2b19439d0f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:9a616e37acfd120612f78043237a8541266ba34883833c9beb43f3da313661ad,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:6b1be6cd94a0942259bca5d5d2c30cc7de4a33276b61f8ae3940226772106256,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:02d2c22d15401574941fbe057095442dee0d6f7a0a9341de35d25e6a12a3fe4b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:fc3b3a36b74fd653946723c54b208072d52200635850b531e9d595a7aaea5a01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:7850ccbff320bf9a1c9c769c1c70777eb97117dd8cd5ae4435be9b4622cf807a,ValueFr
om:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:397dac7e39cf40d14a986e6ec4a60fb698ca35c197d0db315b1318514cc6d1d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:1c95142a36276686e720f86423ee171dc9adcc1e89879f627545b7c906ccd9bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:e331a8fde6638e5ba154c4f0b38772a9a424f60656f2777245975fb1fa02f07d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:aee28476344fc0cc148fbe97daf9b1bfcedc22001550bba4bdc4e84be7b6989d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:cfa0b92c976603ee2a937d34013a238fcd8aa75f998e50642e33489f14124633,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:73c2f2d6eecf88acf4e45b133c8373d9bb006b530e0aff0b28f3b7420620a874,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:927b405cc04abe5ff716186e8d35e2dc5fad1c8430194659ee6617d74e4e055d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:e2db2f4af8d3d0be7868c6efef0189f3a2c74a8f96ae10e3f991cdf83feaef29,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:c773629df257726a6d3cacc24a6e4df0babcd7d37df04e6d14676a8da028b9c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:776211111e2e6493706dbc49a3ba44f31d1b947919313ed3a0f35810e304ec52,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:0a98e8f5c83522ca6c8e40c5e9561f6628d2d5e69f0e8a64279c541c989d3d8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7
520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:7cccf24ad0a152f90ca39893064f48a1656950ee8142685a5d482c71f0bdc9f5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:05450b48f6b5352b2686a26e933e8727748edae2ae9652d9164b7d7a1817c55a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:2346037e064861c7892690d2e8b3e1eea1a26ce3c3a11fda0b41301965bc828c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:c26c3ff9cabe3593ceb10006e782bf9391ac14785768ce9eec4f938c2d3cf228,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:daa45220bb1c47922d0917aa8fe423bb82b03a01429f1c9e37635e701e352d71,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:58ac66ca1be01fe0157977bd79a26cde4d0de153edfaf4162367c924826b2ef4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:99a63770d80cc7c3afa1118b400972fb0e6bff5284a2eae781b12582ad79c29c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:9ee4d84529394afcd860f1a1186484560f02f08c15c37cac42a22473b7116d5f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:ea15fadda7b0439ec637edfaf6ea5dbf3e35fb3be012c7c5a31e722c90becb11,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28jcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg_openstack-operators(9fa2db3b-611f-4907-b329-57a5610f6c50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:59 crc kubenswrapper[4858]: E1122 07:36:59.782096 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 22 07:36:59 crc kubenswrapper[4858]: E1122 07:36:59.782303 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzk2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-799cb6ffd6-w6s65_openstack-operators(31fb5cb2-a858-4c60-adb0-d876c661b634): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:00 crc kubenswrapper[4858]: E1122 07:37:00.573053 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 22 07:37:00 crc kubenswrapper[4858]: E1122 07:37:00.573433 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgzdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7bb88cb858-cndd9_openstack-operators(c59f7090-9bf4-44d0-b2f7-abc4084741d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:04 crc kubenswrapper[4858]: E1122 07:37:04.580217 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 22 07:37:04 crc kubenswrapper[4858]: E1122 07:37:04.580908 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dpj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-c5mcb_openstack-operators(44cc6988-9131-4dc3-9cab-c871699736e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:04 crc kubenswrapper[4858]: E1122 07:37:04.582219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podUID="44cc6988-9131-4dc3-9cab-c871699736e8" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.254393 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.254626 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbng2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w_openstack-operators(03950dc4-3e8b-4723-a446-da38e0918da9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.256342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" podUID="03950dc4-3e8b-4723-a446-da38e0918da9" Nov 22 07:37:05 crc kubenswrapper[4858]: I1122 07:37:05.830601 4858 generic.go:334] "Generic (PLEG): container finished" podID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerID="d5f66f82e27fafd86d5e993e7a1aa3a56c9c13fa4f86f0b7a1c5733ca3ecfd15" exitCode=0 Nov 22 07:37:05 crc kubenswrapper[4858]: I1122 07:37:05.831616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerDied","Data":"d5f66f82e27fafd86d5e993e7a1aa3a56c9c13fa4f86f0b7a1c5733ca3ecfd15"} Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.842739 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" podUID="03950dc4-3e8b-4723-a446-da38e0918da9" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.864458 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.864702 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrm2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-sc76l_openstack-operators(6588ae44-3c72-4f10-ab0f-fea80227cfc8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:05 crc kubenswrapper[4858]: E1122 07:37:05.866494 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podUID="6588ae44-3c72-4f10-ab0f-fea80227cfc8" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.137864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" podUID="23bfa545-d340-4a3f-afeb-8e292096cb33" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.242427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" podUID="ba1a065a-3e2f-41fd-9eba-761128ddfcdf" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.260147 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" podUID="b3cf7fca-3d8c-4aae-8974-0ed60d98e105" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.360857 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" podUID="34c4063b-09b1-4591-a026-0bb061649b1a" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.376490 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" podUID="032bc77f-0555-4036-9507-fa28e25f89fe" Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.380616 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwksg"] Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.389081 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.419596 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" podUID="9fa2db3b-611f-4907-b329-57a5610f6c50" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.441211 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" podUID="31fb5cb2-a858-4c60-adb0-d876c661b634" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.447286 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" podUID="db81ffbc-f748-4ece-ac50-189d6811825e" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.513101 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" podUID="ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.513458 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" podUID="920a745c-fd4f-4a5f-a242-6524b871fd64" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.513589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" 
podUID="6843bd96-e987-4170-823d-3b929461a48f" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.732384 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" podUID="c59f7090-9bf4-44d0-b2f7-abc4084741d5" Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.841899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" event={"ID":"ba1a065a-3e2f-41fd-9eba-761128ddfcdf","Type":"ContainerStarted","Data":"57a0524abc0814de8a25de41f1d8e13408f8f180e4fdf2bac1a7e2bc4fa194de"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.843813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerStarted","Data":"1f3c36ea822bf3ce05b0dd82de91b8590b6900f97a6b9e14bc3872c4208e6e68"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.846293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" event={"ID":"920a745c-fd4f-4a5f-a242-6524b871fd64","Type":"ContainerStarted","Data":"77ba135d0d2bb70ba5189695956adb8c54e7029af2d1bbc685ca31bdd0e49c36"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.848395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" event={"ID":"31fb5cb2-a858-4c60-adb0-d876c661b634","Type":"ContainerStarted","Data":"53635dad1cecbe2fd3e5a90bcf849fdee3d94b70077e62913d6d809ee8c85a9a"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.849959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwksg" event={"ID":"c136ac8d-cfc2-4ad9-ae4c-adef04349862","Type":"ContainerStarted","Data":"4230059c997eddbc45674d32fc5c314653baa38789b30b0b41757cd6e6125a1c"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.851687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" event={"ID":"34c4063b-09b1-4591-a026-0bb061649b1a","Type":"ContainerStarted","Data":"5a22156896e460b6ba523728610f02a5b4d88f1dbf0ce70a860c8a479ee9a6c5"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.859196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" event={"ID":"c59f7090-9bf4-44d0-b2f7-abc4084741d5","Type":"ContainerStarted","Data":"44683e5ebabac37636f515ee9e07d1a38206aaafbb4e61bc168c0ab3df37dcfd"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.867824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" event={"ID":"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5","Type":"ContainerStarted","Data":"aff9fd5307412a4d686e6cf57bc2d2c2b50fa1f2f8d790902f9be34e4140ba86"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.878117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" event={"ID":"13fd8728-3920-4cc8-a6fd-af0593936c76","Type":"ContainerStarted","Data":"c9042527eedfb64156bafd5b2e1f071ee30501ef574d87f6b772f1a034b67770"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 
07:37:06.892987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" event={"ID":"9fa2db3b-611f-4907-b329-57a5610f6c50","Type":"ContainerStarted","Data":"a093064e3ec783a99c92e050e1f76b252b2fd93d0b7265ac27233530350d86d4"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.909041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" event={"ID":"db81ffbc-f748-4ece-ac50-189d6811825e","Type":"ContainerStarted","Data":"84fdb81b3c160fb407f4cc05bf6976d040bf73dfe0868e5c8489aad3cb6b12b9"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.917813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" event={"ID":"23bfa545-d340-4a3f-afeb-8e292096cb33","Type":"ContainerStarted","Data":"e223ccba9c3f020bf2c3e4c7c4e5863f37fd08530b56ff544a9e35f955ab547b"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.921379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" event={"ID":"6843bd96-e987-4170-823d-3b929461a48f","Type":"ContainerStarted","Data":"79fcff5e949ccd54d52aab3e8469cff1e236cf590757995e5ae332975a8a3c00"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.932869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" event={"ID":"032bc77f-0555-4036-9507-fa28e25f89fe","Type":"ContainerStarted","Data":"d5923c85653f88bcdd8a91ce53b11626c610543cacdb929a1e2119faf4b745fc"} Nov 22 07:37:06 crc kubenswrapper[4858]: I1122 07:37:06.936242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" event={"ID":"b3cf7fca-3d8c-4aae-8974-0ed60d98e105","Type":"ContainerStarted","Data":"2a0dd09d41f2a26c3f479be2e89d30c8e82edd9a6ddb993e9ec71f52903078c7"} Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.975683 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" podUID="ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.976194 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" podUID="c59f7090-9bf4-44d0-b2f7-abc4084741d5" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.976902 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" podUID="6843bd96-e987-4170-823d-3b929461a48f" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.976929 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" podUID="9fa2db3b-611f-4907-b329-57a5610f6c50" Nov 22 07:37:06 crc kubenswrapper[4858]: E1122 07:37:06.976940 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" podUID="31fb5cb2-a858-4c60-adb0-d876c661b634" Nov 22 07:37:07 crc kubenswrapper[4858]: I1122 07:37:07.978703 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerID="5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a" exitCode=0 Nov 22 07:37:07 crc kubenswrapper[4858]: I1122 07:37:07.980747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerDied","Data":"5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a"} Nov 22 07:37:07 crc kubenswrapper[4858]: I1122 07:37:07.996714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerStarted","Data":"02db518f686fc0cf90a0754a6a7f50eccedf66c3c34403b2966c8f02fd932af1"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.000449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" event={"ID":"450117a8-48ee-4b20-8588-a38daf6ff303","Type":"ContainerStarted","Data":"bb4ca0ea98b80be3a9b6d11971c039014518e32b66c53b1d3f278fd51f920af5"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.007496 4858 generic.go:334] "Generic (PLEG): container finished" podID="c136ac8d-cfc2-4ad9-ae4c-adef04349862" containerID="1f46f8c56dcaf8fc382bcf7470a2931ee35b12305430e56519320a31dea66a9e" exitCode=0 Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.007590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwksg" event={"ID":"c136ac8d-cfc2-4ad9-ae4c-adef04349862","Type":"ContainerDied","Data":"1f46f8c56dcaf8fc382bcf7470a2931ee35b12305430e56519320a31dea66a9e"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.009757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" event={"ID":"e60163a5-437a-4647-ba1e-ccb800dc2d30","Type":"ContainerStarted","Data":"08c33502de60d13774857b616a92ed8489bb38dff27904a17cb5c1f821cbe036"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.021603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" event={"ID":"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49","Type":"ContainerStarted","Data":"7ea49b479a01edc4a8530324d6707484472fb7989ef10ae4c7ff805764724851"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.028896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" event={"ID":"9b045d27-9e4d-4615-a7f0-e590d259bab4","Type":"ContainerStarted","Data":"0bf32c4a4d2a70d6c665cea02e7b28bd1b5c8271aeb75dcadda8de5e15c55722"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.034418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" event={"ID":"a9c2e231-a0b4-4125-9a52-53c0012e33df","Type":"ContainerStarted","Data":"d534d52b52ddb23075f70884c99d5e8ae7f9c92fb8dbcafd41ba1692f7c7aa67"} Nov 22 07:37:08 crc kubenswrapper[4858]: I1122 07:37:08.041996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" event={"ID":"32923d04-1afb-482f-b8f2-30dbe60e166f","Type":"ContainerStarted","Data":"85f6f7ff79692a584ad2378aecd090968c1a33e5b3330a5142544178c9d1c7d4"} Nov 22 07:37:08 crc kubenswrapper[4858]: E1122 07:37:08.043358 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" podUID="31fb5cb2-a858-4c60-adb0-d876c661b634" Nov 22 07:37:08 crc kubenswrapper[4858]: E1122 07:37:08.043864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\"" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" podUID="ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5" Nov 22 07:37:08 crc kubenswrapper[4858]: E1122 07:37:08.048462 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" podUID="9fa2db3b-611f-4907-b329-57a5610f6c50" Nov 22 07:37:08 crc kubenswrapper[4858]: E1122 07:37:08.048498 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" podUID="c59f7090-9bf4-44d0-b2f7-abc4084741d5" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.059778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" event={"ID":"13fd8728-3920-4cc8-a6fd-af0593936c76","Type":"ContainerStarted","Data":"22827f50dbf5d5b87af4b08d634bb3c836079cab67d0a8744c2cb42e969af566"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.060420 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.062166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" event={"ID":"9b045d27-9e4d-4615-a7f0-e590d259bab4","Type":"ContainerStarted","Data":"110828d0cbbb9ea8ea823eee786c28db699815dc9a205d07200cdc8705ad6e99"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.062332 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.066017 4858 generic.go:334] "Generic (PLEG): container finished" podID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerID="02db518f686fc0cf90a0754a6a7f50eccedf66c3c34403b2966c8f02fd932af1" exitCode=0 Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.066118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerDied","Data":"02db518f686fc0cf90a0754a6a7f50eccedf66c3c34403b2966c8f02fd932af1"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.069699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" event={"ID":"450117a8-48ee-4b20-8588-a38daf6ff303","Type":"ContainerStarted","Data":"93a735f283bfd0ef4b026895dc892cef89cd63adf9a0234c7c8c8ae467c17c10"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.072067 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" event={"ID":"a9c2e231-a0b4-4125-9a52-53c0012e33df","Type":"ContainerStarted","Data":"3809613a314230a3945f7ca7861800324ff5907c502265e8363917557c9a0cbc"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.074934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" event={"ID":"32923d04-1afb-482f-b8f2-30dbe60e166f","Type":"ContainerStarted","Data":"ecf56e5b4a93f01db848ea5a792172cd9883e72c685835b5c944d3d485e45f1f"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.075813 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.080553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" event={"ID":"0e34d0d6-23cf-4ebc-ac61-72290f0dbb49","Type":"ContainerStarted","Data":"5db3e34f000a66e76372796de0638da0093f6f5d2ef8e63d9af20c489c82834f"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.082836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" event={"ID":"e60163a5-437a-4647-ba1e-ccb800dc2d30","Type":"ContainerStarted","Data":"9ae3030d3afa50855a0f72cb051668f78830b3e5a41c199b1d7ff623b6aeed32"} Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.093600 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" podStartSLOduration=9.082507911 podStartE2EDuration="56.09357364s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.268221885 +0000 UTC m=+1540.109644891" lastFinishedPulling="2025-11-22 07:37:05.279287614 +0000 UTC m=+1587.120710620" observedRunningTime="2025-11-22 07:37:10.088805239 +0000 UTC m=+1591.930228245" 
watchObservedRunningTime="2025-11-22 07:37:10.09357364 +0000 UTC m=+1591.934996646" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.163909 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" podStartSLOduration=8.881891756 podStartE2EDuration="56.163885093s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.996576843 +0000 UTC m=+1539.837999849" lastFinishedPulling="2025-11-22 07:37:05.27857018 +0000 UTC m=+1587.119993186" observedRunningTime="2025-11-22 07:37:10.151039862 +0000 UTC m=+1591.992462888" watchObservedRunningTime="2025-11-22 07:37:10.163885093 +0000 UTC m=+1592.005308099" Nov 22 07:37:10 crc kubenswrapper[4858]: I1122 07:37:10.189172 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" podStartSLOduration=7.585119276 podStartE2EDuration="56.189145778s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:16.668604409 +0000 UTC m=+1538.510027415" lastFinishedPulling="2025-11-22 07:37:05.272630911 +0000 UTC m=+1587.114053917" observedRunningTime="2025-11-22 07:37:10.176613808 +0000 UTC m=+1592.018036804" watchObservedRunningTime="2025-11-22 07:37:10.189145778 +0000 UTC m=+1592.030568784" Nov 22 07:37:11 crc kubenswrapper[4858]: I1122 07:37:11.092686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-m2b42" Nov 22 07:37:11 crc kubenswrapper[4858]: I1122 07:37:11.112203 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" podStartSLOduration=9.306016128 podStartE2EDuration="57.112178589s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.991037017 +0000 UTC m=+1539.832460023" lastFinishedPulling="2025-11-22 07:37:05.797199478 +0000 UTC m=+1587.638622484" observedRunningTime="2025-11-22 07:37:11.105218238 +0000 UTC m=+1592.946641254" watchObservedRunningTime="2025-11-22 07:37:11.112178589 +0000 UTC m=+1592.953601595" Nov 22 07:37:11 crc kubenswrapper[4858]: I1122 07:37:11.129916 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" podStartSLOduration=9.767667868 podStartE2EDuration="57.129894994s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.434867228 +0000 UTC m=+1540.276290234" lastFinishedPulling="2025-11-22 07:37:05.797094354 +0000 UTC m=+1587.638517360" observedRunningTime="2025-11-22 07:37:11.124931586 +0000 UTC m=+1592.966354592" watchObservedRunningTime="2025-11-22 07:37:11.129894994 +0000 UTC m=+1592.971318000" Nov 22 07:37:11 crc kubenswrapper[4858]: I1122 07:37:11.196456 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" podStartSLOduration=7.887061005 podStartE2EDuration="57.196429986s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:15.970271393 +0000 UTC m=+1537.811694399" lastFinishedPulling="2025-11-22 07:37:05.279640374 +0000 UTC m=+1587.121063380" observedRunningTime="2025-11-22 07:37:11.172218544 +0000 UTC m=+1593.013641570" 
watchObservedRunningTime="2025-11-22 07:37:11.196429986 +0000 UTC m=+1593.037852992" Nov 22 07:37:11 crc kubenswrapper[4858]: I1122 07:37:11.197399 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" podStartSLOduration=9.338116762 podStartE2EDuration="57.197393017s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.937992555 +0000 UTC m=+1539.779415561" lastFinishedPulling="2025-11-22 07:37:05.79726881 +0000 UTC m=+1587.638691816" observedRunningTime="2025-11-22 07:37:11.188768301 +0000 UTC m=+1593.030191327" watchObservedRunningTime="2025-11-22 07:37:11.197393017 +0000 UTC m=+1593.038816023" Nov 22 07:37:12 crc kubenswrapper[4858]: I1122 07:37:12.100816 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-pwftd" Nov 22 07:37:14 crc kubenswrapper[4858]: I1122 07:37:14.660311 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:37:14 crc kubenswrapper[4858]: I1122 07:37:14.665315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-qplxm" Nov 22 07:37:14 crc kubenswrapper[4858]: I1122 07:37:14.805229 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-2tz7w" Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.197968 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.200992 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-l65cp" Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.312250 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.312334 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.546299 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:37:15 crc kubenswrapper[4858]: I1122 07:37:15.546448 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-kpjqt" Nov 22 07:37:16 crc kubenswrapper[4858]: I1122 07:37:16.145666 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:37:16 crc kubenswrapper[4858]: I1122 07:37:16.147884 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-8464cf66df-tg7dz" Nov 22 07:37:22 crc kubenswrapper[4858]: E1122 07:37:22.538521 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podUID="6588ae44-3c72-4f10-ab0f-fea80227cfc8" Nov 22 07:37:23 crc kubenswrapper[4858]: E1122 07:37:23.922802 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podUID="44cc6988-9131-4dc3-9cab-c871699736e8" Nov 22 07:37:45 crc kubenswrapper[4858]: I1122 07:37:45.311920 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:37:45 crc kubenswrapper[4858]: I1122 07:37:45.312560 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:37:49 crc kubenswrapper[4858]: E1122 07:37:49.075878 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:37:49 crc kubenswrapper[4858]: E1122 07:37:49.077601 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jc7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dwksg_openshift-marketplace(c136ac8d-cfc2-4ad9-ae4c-adef04349862): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:37:49 crc kubenswrapper[4858]: E1122 07:37:49.078909 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dwksg" podUID="c136ac8d-cfc2-4ad9-ae4c-adef04349862" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.437209 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" event={"ID":"31fb5cb2-a858-4c60-adb0-d876c661b634","Type":"ContainerStarted","Data":"89c2999e64a9ce12a8ca7d96b7d6d984a2f3d95e8f3a7ecf5902e7e5ae9e36a5"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.437804 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.438740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" event={"ID":"23bfa545-d340-4a3f-afeb-8e292096cb33","Type":"ContainerStarted","Data":"6cefd678247617c98b1230b85c26588290790216cf8f1dc65b6ebfd3e7d36e73"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.438917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.440231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" event={"ID":"34c4063b-09b1-4591-a026-0bb061649b1a","Type":"ContainerStarted","Data":"dc2fa7eb6f0af1dff975f546d4819dc83e63a1c8a4bdc09c92c43a8096bb413c"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.440759 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.450376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" event={"ID":"6588ae44-3c72-4f10-ab0f-fea80227cfc8","Type":"ContainerStarted","Data":"134d5714bb2511ca4cd9c5184fbeaced8cd63dc679a04f590794837adc1a890e"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.450777 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.452811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" event={"ID":"ba1a065a-3e2f-41fd-9eba-761128ddfcdf","Type":"ContainerStarted","Data":"df8a83ef4489dc98ec356a334a20ee13c3b6b879558884dc8152de0da4a849ab"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.452946 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.454996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" event={"ID":"db81ffbc-f748-4ece-ac50-189d6811825e","Type":"ContainerStarted","Data":"e488a356dfae78134eb47b57ce14917da149518a575d7064feda50b30002962c"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.455600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.474206 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" event={"ID":"03950dc4-3e8b-4723-a446-da38e0918da9","Type":"ContainerStarted","Data":"f7aeb02580a92b740bb65944ba29dbee15e11400e2c488b21e63b27d735b74c1"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.477944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" event={"ID":"c59f7090-9bf4-44d0-b2f7-abc4084741d5","Type":"ContainerStarted","Data":"2e5de82cd48e07e5a49a6817acab446ee0208102f3edb762e58b907eda225bde"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.478665 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.480658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" event={"ID":"ef539dfd-5e5d-4fe7-97e7-2d98f8e4c4a5","Type":"ContainerStarted","Data":"3409e845330705ee1f1fde473b669e320bcb1b71c21b467ead6681114eac4e27"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.481264 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.483727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" 
event={"ID":"920a745c-fd4f-4a5f-a242-6524b871fd64","Type":"ContainerStarted","Data":"c042ed5682d51c83dfcbda937b5f8b1eefe14511dcc0aa9177b1bb4c8db73865"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.484515 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.487088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" event={"ID":"6843bd96-e987-4170-823d-3b929461a48f","Type":"ContainerStarted","Data":"47b8c4741fc52f2dd40dbc0115c51be0eb805781143c91f2b32e09c66fddd116"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.487789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.489904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" event={"ID":"9fa2db3b-611f-4907-b329-57a5610f6c50","Type":"ContainerStarted","Data":"52a63d113cee9d8cc67c14fb5ca8fb528b0faa9a715c0decc11235497d6b2908"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.490609 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.492588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" event={"ID":"44cc6988-9131-4dc3-9cab-c871699736e8","Type":"ContainerStarted","Data":"d700ccf6bfaf8b39a99d94c59ce36167d746c3eda097d080eeec41f25c9a981e"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.493150 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.496739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerStarted","Data":"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.498907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" event={"ID":"032bc77f-0555-4036-9507-fa28e25f89fe","Type":"ContainerStarted","Data":"b64363fbf9b2e6d1c78d4e7349718dea15e442956ecfa9163e851fb8577e1298"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.499717 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.509653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerStarted","Data":"dd5e8b9f6af1d62d077a60350458d48cb6e69e34e46b34e41fd81403f0e140ed"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.525868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" 
event={"ID":"b3cf7fca-3d8c-4aae-8974-0ed60d98e105","Type":"ContainerStarted","Data":"e64183e67ad3bcdfc8ad29791dad95883535009bcd772ba56cab2f8935a106f4"} Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.526054 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.623189 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" podStartSLOduration=5.5078569250000005 podStartE2EDuration="1m39.623159774s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.984613982 +0000 UTC m=+1539.826036988" lastFinishedPulling="2025-11-22 07:37:52.099916821 +0000 UTC m=+1633.941339837" observedRunningTime="2025-11-22 07:37:53.622037379 +0000 UTC m=+1635.463460405" watchObservedRunningTime="2025-11-22 07:37:53.623159774 +0000 UTC m=+1635.464582780" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.624528 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" podStartSLOduration=5.506670898 podStartE2EDuration="1m39.624516778s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.981939556 +0000 UTC m=+1539.823362562" lastFinishedPulling="2025-11-22 07:37:52.099785436 +0000 UTC m=+1633.941208442" observedRunningTime="2025-11-22 07:37:53.509518413 +0000 UTC m=+1635.350941429" watchObservedRunningTime="2025-11-22 07:37:53.624516778 +0000 UTC m=+1635.465939784" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.678309 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" podStartSLOduration=5.922248344 podStartE2EDuration="1m39.678290151s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:16.417695859 +0000 UTC m=+1538.259118865" lastFinishedPulling="2025-11-22 07:37:50.173737656 +0000 UTC m=+1632.015160672" observedRunningTime="2025-11-22 07:37:53.671951348 +0000 UTC m=+1635.513374384" watchObservedRunningTime="2025-11-22 07:37:53.678290151 +0000 UTC m=+1635.519713157" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.718400 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" podStartSLOduration=5.615217243 podStartE2EDuration="1m39.718380265s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.996920724 +0000 UTC m=+1539.838343730" lastFinishedPulling="2025-11-22 07:37:52.100083756 +0000 UTC m=+1633.941506752" observedRunningTime="2025-11-22 07:37:53.715883706 +0000 UTC m=+1635.557306742" watchObservedRunningTime="2025-11-22 07:37:53.718380265 +0000 UTC m=+1635.559803271" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.824048 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" podStartSLOduration=5.704957169 podStartE2EDuration="1m39.824019841s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.060218282 +0000 UTC m=+1539.901641288" lastFinishedPulling="2025-11-22 07:37:52.179280954 +0000 UTC m=+1634.020703960" observedRunningTime="2025-11-22 
07:37:53.756161207 +0000 UTC m=+1635.597584233" watchObservedRunningTime="2025-11-22 07:37:53.824019841 +0000 UTC m=+1635.665442847" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.827448 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mgf9w" podStartSLOduration=4.686701883 podStartE2EDuration="1m38.82742452s" podCreationTimestamp="2025-11-22 07:36:15 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.007974247 +0000 UTC m=+1539.849397253" lastFinishedPulling="2025-11-22 07:37:52.148696884 +0000 UTC m=+1633.990119890" observedRunningTime="2025-11-22 07:37:53.812505122 +0000 UTC m=+1635.653928138" watchObservedRunningTime="2025-11-22 07:37:53.82742452 +0000 UTC m=+1635.668847536" Nov 22 07:37:53 crc kubenswrapper[4858]: I1122 07:37:53.858780 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" podStartSLOduration=5.298321999 podStartE2EDuration="1m39.858761274s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.427747255 +0000 UTC m=+1539.269170271" lastFinishedPulling="2025-11-22 07:37:51.98818654 +0000 UTC m=+1633.829609546" observedRunningTime="2025-11-22 07:37:53.851385288 +0000 UTC m=+1635.692808314" watchObservedRunningTime="2025-11-22 07:37:53.858761274 +0000 UTC m=+1635.700184280" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.004112 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" podStartSLOduration=7.471992963 podStartE2EDuration="1m40.00405464s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.937482839 +0000 UTC m=+1539.778905845" lastFinishedPulling="2025-11-22 07:37:50.469544516 +0000 UTC m=+1632.310967522" observedRunningTime="2025-11-22 07:37:54.001130456 +0000 UTC m=+1635.842553482" watchObservedRunningTime="2025-11-22 07:37:54.00405464 +0000 UTC m=+1635.845477646" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.006764 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" podStartSLOduration=33.317261017 podStartE2EDuration="1m40.006743316s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:19.195252125 +0000 UTC m=+1541.036675131" lastFinishedPulling="2025-11-22 07:37:25.884734424 +0000 UTC m=+1607.726157430" observedRunningTime="2025-11-22 07:37:53.94661776 +0000 UTC m=+1635.788040766" watchObservedRunningTime="2025-11-22 07:37:54.006743316 +0000 UTC m=+1635.848166322" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.059238 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" podStartSLOduration=7.097558511 podStartE2EDuration="1m40.059202387s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:17.510464022 +0000 UTC m=+1539.351887028" lastFinishedPulling="2025-11-22 07:37:50.472107898 +0000 UTC m=+1632.313530904" observedRunningTime="2025-11-22 07:37:54.04838077 +0000 UTC m=+1635.889803796" watchObservedRunningTime="2025-11-22 07:37:54.059202387 +0000 UTC m=+1635.900625393" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.166934 4858 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" podStartSLOduration=7.673678114 podStartE2EDuration="1m40.166903569s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:19.714402648 +0000 UTC m=+1541.555825654" lastFinishedPulling="2025-11-22 07:37:52.207628103 +0000 UTC m=+1634.049051109" observedRunningTime="2025-11-22 07:37:54.155311067 +0000 UTC m=+1635.996734073" watchObservedRunningTime="2025-11-22 07:37:54.166903569 +0000 UTC m=+1636.008326575" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.273271 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" podStartSLOduration=7.602673749 podStartE2EDuration="1m40.273247797s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.014929478 +0000 UTC m=+1539.856352484" lastFinishedPulling="2025-11-22 07:37:50.685503526 +0000 UTC m=+1632.526926532" observedRunningTime="2025-11-22 07:37:54.270481738 +0000 UTC m=+1636.111904744" watchObservedRunningTime="2025-11-22 07:37:54.273247797 +0000 UTC m=+1636.114670803" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.397840 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" podStartSLOduration=6.318536776 podStartE2EDuration="1m40.397817219s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:18.098691419 +0000 UTC m=+1539.940114425" lastFinishedPulling="2025-11-22 07:37:52.177971862 +0000 UTC m=+1634.019394868" observedRunningTime="2025-11-22 07:37:54.393547302 +0000 UTC m=+1636.234970328" watchObservedRunningTime="2025-11-22 07:37:54.397817219 +0000 UTC m=+1636.239240225" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.399149 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" podStartSLOduration=6.485366367 podStartE2EDuration="1m40.399140311s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 07:36:16.555613017 +0000 UTC m=+1538.397036023" lastFinishedPulling="2025-11-22 07:37:50.469386961 +0000 UTC m=+1632.310809967" observedRunningTime="2025-11-22 07:37:54.339887692 +0000 UTC m=+1636.181310718" watchObservedRunningTime="2025-11-22 07:37:54.399140311 +0000 UTC m=+1636.240563317" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.480940 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sq482" podStartSLOduration=47.712058271 podStartE2EDuration="1m32.480912371s" podCreationTimestamp="2025-11-22 07:36:22 +0000 UTC" firstStartedPulling="2025-11-22 07:37:05.916664697 +0000 UTC m=+1587.758087703" lastFinishedPulling="2025-11-22 07:37:50.685518797 +0000 UTC m=+1632.526941803" observedRunningTime="2025-11-22 07:37:54.479082913 +0000 UTC m=+1636.320505929" watchObservedRunningTime="2025-11-22 07:37:54.480912371 +0000 UTC m=+1636.322335377" Nov 22 07:37:54 crc kubenswrapper[4858]: I1122 07:37:54.486118 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" podStartSLOduration=33.573330406 podStartE2EDuration="1m40.486097857s" podCreationTimestamp="2025-11-22 07:36:14 +0000 UTC" firstStartedPulling="2025-11-22 
07:36:17.904677433 +0000 UTC m=+1539.746100439" lastFinishedPulling="2025-11-22 07:37:24.817444874 +0000 UTC m=+1606.658867890" observedRunningTime="2025-11-22 07:37:54.432384276 +0000 UTC m=+1636.273807292" watchObservedRunningTime="2025-11-22 07:37:54.486097857 +0000 UTC m=+1636.327520863" Nov 22 07:37:55 crc kubenswrapper[4858]: I1122 07:37:55.557566 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerID="1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd" exitCode=0 Nov 22 07:37:55 crc kubenswrapper[4858]: I1122 07:37:55.557690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerDied","Data":"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd"} Nov 22 07:37:56 crc kubenswrapper[4858]: I1122 07:37:56.576399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerStarted","Data":"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8"} Nov 22 07:37:56 crc kubenswrapper[4858]: I1122 07:37:56.599982 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b28vs" podStartSLOduration=29.494012068 podStartE2EDuration="1m17.599956557s" podCreationTimestamp="2025-11-22 07:36:39 +0000 UTC" firstStartedPulling="2025-11-22 07:37:07.981818655 +0000 UTC m=+1589.823241671" lastFinishedPulling="2025-11-22 07:37:56.087763154 +0000 UTC m=+1637.929186160" observedRunningTime="2025-11-22 07:37:56.59723014 +0000 UTC m=+1638.438653156" watchObservedRunningTime="2025-11-22 07:37:56.599956557 +0000 UTC m=+1638.441379573" Nov 22 07:37:58 crc kubenswrapper[4858]: I1122 07:37:58.488752 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-499nc" Nov 22 07:37:58 crc kubenswrapper[4858]: I1122 07:37:58.876222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44rvwdg" Nov 22 07:37:59 crc kubenswrapper[4858]: I1122 07:37:59.526551 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:37:59 crc kubenswrapper[4858]: I1122 07:37:59.527214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:38:00 crc kubenswrapper[4858]: I1122 07:38:00.570037 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b28vs" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="registry-server" probeResult="failure" output=< Nov 22 07:38:00 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:38:00 crc kubenswrapper[4858]: > Nov 22 07:38:02 crc kubenswrapper[4858]: I1122 07:38:02.427361 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:02 crc kubenswrapper[4858]: I1122 07:38:02.427492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:02 crc kubenswrapper[4858]: I1122 07:38:02.491270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:02 crc kubenswrapper[4858]: I1122 07:38:02.664293 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:02 crc kubenswrapper[4858]: I1122 07:38:02.722739 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.636129 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sq482" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="registry-server" containerID="cri-o://dd5e8b9f6af1d62d077a60350458d48cb6e69e34e46b34e41fd81403f0e140ed" gracePeriod=2 Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.683067 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-7mpq7" Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.727083 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.874399 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-7qg4d" Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.875303 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-gb8kr" Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.875883 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-cndd9" Nov 22 07:38:04 crc kubenswrapper[4858]: I1122 07:38:04.896309 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-spzd8" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.123042 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-6m467" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.127858 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-c5mcb" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.175053 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-8njw2" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.376080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-q2z78" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.739218 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-w6s65" Nov 22 07:38:05 crc kubenswrapper[4858]: I1122 07:38:05.771373 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sc76l" Nov 22 07:38:06 crc kubenswrapper[4858]: I1122 07:38:06.653110 4858 generic.go:334] "Generic (PLEG): container finished" podID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" 
containerID="dd5e8b9f6af1d62d077a60350458d48cb6e69e34e46b34e41fd81403f0e140ed" exitCode=0 Nov 22 07:38:06 crc kubenswrapper[4858]: I1122 07:38:06.653151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerDied","Data":"dd5e8b9f6af1d62d077a60350458d48cb6e69e34e46b34e41fd81403f0e140ed"} Nov 22 07:38:09 crc kubenswrapper[4858]: I1122 07:38:09.574597 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:38:09 crc kubenswrapper[4858]: I1122 07:38:09.623362 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:38:09 crc kubenswrapper[4858]: I1122 07:38:09.897799 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.036135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content\") pod \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.036245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgfmq\" (UniqueName: \"kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq\") pod \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.036287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities\") pod \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\" (UID: \"c703fd8c-fe17-4db6-a2d5-e790e620acd0\") " Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.038433 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities" (OuterVolumeSpecName: "utilities") pod "c703fd8c-fe17-4db6-a2d5-e790e620acd0" (UID: "c703fd8c-fe17-4db6-a2d5-e790e620acd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.044750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq" (OuterVolumeSpecName: "kube-api-access-wgfmq") pod "c703fd8c-fe17-4db6-a2d5-e790e620acd0" (UID: "c703fd8c-fe17-4db6-a2d5-e790e620acd0"). InnerVolumeSpecName "kube-api-access-wgfmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.092971 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c703fd8c-fe17-4db6-a2d5-e790e620acd0" (UID: "c703fd8c-fe17-4db6-a2d5-e790e620acd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.138463 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgfmq\" (UniqueName: \"kubernetes.io/projected/c703fd8c-fe17-4db6-a2d5-e790e620acd0-kube-api-access-wgfmq\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.138505 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.138515 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703fd8c-fe17-4db6-a2d5-e790e620acd0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.687751 4858 generic.go:334] "Generic (PLEG): container finished" podID="c136ac8d-cfc2-4ad9-ae4c-adef04349862" containerID="a0ef417ff8b9dad51ebfc9d25203c2de5071fa1e9cedea57b82a04cee1e9697d" exitCode=0 Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.687842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwksg" event={"ID":"c136ac8d-cfc2-4ad9-ae4c-adef04349862","Type":"ContainerDied","Data":"a0ef417ff8b9dad51ebfc9d25203c2de5071fa1e9cedea57b82a04cee1e9697d"} Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.691639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq482" event={"ID":"c703fd8c-fe17-4db6-a2d5-e790e620acd0","Type":"ContainerDied","Data":"e2303519159ff59cada2ca66c503fad7bbe49d5874665964258d37d3f608764b"} Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.691716 4858 scope.go:117] "RemoveContainer" containerID="dd5e8b9f6af1d62d077a60350458d48cb6e69e34e46b34e41fd81403f0e140ed" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.692063 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq482" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.714289 4858 scope.go:117] "RemoveContainer" containerID="02db518f686fc0cf90a0754a6a7f50eccedf66c3c34403b2966c8f02fd932af1" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.737536 4858 scope.go:117] "RemoveContainer" containerID="d5f66f82e27fafd86d5e993e7a1aa3a56c9c13fa4f86f0b7a1c5733ca3ecfd15" Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.739451 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.746659 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sq482"] Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.833975 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:38:10 crc kubenswrapper[4858]: I1122 07:38:10.834657 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b28vs" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="registry-server" containerID="cri-o://3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8" gracePeriod=2 Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.403303 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.457861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j689\" (UniqueName: \"kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689\") pod \"f4af3305-c68f-4034-ac7d-9002c41a711a\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.458746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content\") pod \"f4af3305-c68f-4034-ac7d-9002c41a711a\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.458896 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities\") pod \"f4af3305-c68f-4034-ac7d-9002c41a711a\" (UID: \"f4af3305-c68f-4034-ac7d-9002c41a711a\") " Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.459817 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities" (OuterVolumeSpecName: "utilities") pod "f4af3305-c68f-4034-ac7d-9002c41a711a" (UID: "f4af3305-c68f-4034-ac7d-9002c41a711a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.464400 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689" (OuterVolumeSpecName: "kube-api-access-5j689") pod "f4af3305-c68f-4034-ac7d-9002c41a711a" (UID: "f4af3305-c68f-4034-ac7d-9002c41a711a"). InnerVolumeSpecName "kube-api-access-5j689". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.555524 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" path="/var/lib/kubelet/pods/c703fd8c-fe17-4db6-a2d5-e790e620acd0/volumes" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.557555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4af3305-c68f-4034-ac7d-9002c41a711a" (UID: "f4af3305-c68f-4034-ac7d-9002c41a711a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.560627 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.560667 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j689\" (UniqueName: \"kubernetes.io/projected/f4af3305-c68f-4034-ac7d-9002c41a711a-kube-api-access-5j689\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.560677 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4af3305-c68f-4034-ac7d-9002c41a711a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.704705 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerID="3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8" exitCode=0 Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.704781 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b28vs" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.704803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerDied","Data":"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8"} Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.706116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b28vs" event={"ID":"f4af3305-c68f-4034-ac7d-9002c41a711a","Type":"ContainerDied","Data":"1f3c36ea822bf3ce05b0dd82de91b8590b6900f97a6b9e14bc3872c4208e6e68"} Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.706268 4858 scope.go:117] "RemoveContainer" containerID="3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.739142 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.744813 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b28vs"] Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.773626 4858 scope.go:117] "RemoveContainer" containerID="1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.801021 4858 scope.go:117] "RemoveContainer" containerID="5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.817340 4858 scope.go:117] "RemoveContainer" containerID="3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8" Nov 22 07:38:11 crc kubenswrapper[4858]: E1122 07:38:11.817934 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8\": container with ID starting with 3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8 not found: ID does not exist" containerID="3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.817979 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8"} err="failed to get container status \"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8\": rpc error: code = NotFound desc = could not find container \"3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8\": container with ID starting with 3d28e72ac7759e673eaa6193c9edc8172243abcd3288693791ca51012c26ffb8 not found: ID does not exist" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.818015 4858 scope.go:117] "RemoveContainer" containerID="1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd" Nov 22 07:38:11 crc kubenswrapper[4858]: E1122 07:38:11.818443 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd\": container with ID starting with 1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd not found: ID does not exist" containerID="1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.818482 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd"} err="failed to get container status \"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd\": rpc error: code = NotFound desc = could not find container \"1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd\": container with ID starting with 1f3de03731e92719dd843cb1924c6fa47c79d885fe1a99929e883d86d6a392cd not found: ID does not exist" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.818534 4858 scope.go:117] "RemoveContainer" containerID="5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a" Nov 22 07:38:11 crc kubenswrapper[4858]: E1122 07:38:11.818823 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a\": container with ID starting with 5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a not found: ID does not exist" containerID="5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a" Nov 22 07:38:11 crc kubenswrapper[4858]: I1122 07:38:11.818857 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a"} err="failed to get container status \"5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a\": rpc error: code = NotFound desc = could not find container \"5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a\": container with ID starting with 5e83a671ca8d4a4de6424550a8ce2ce9055726ce839e302fed68ad7ea9b3684a not found: ID does not exist" Nov 22 07:38:12 crc kubenswrapper[4858]: I1122 07:38:12.719199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwksg" event={"ID":"c136ac8d-cfc2-4ad9-ae4c-adef04349862","Type":"ContainerStarted","Data":"d8f74a92e6d59faf8da0f2a37374df611e9baab0d0b420a47d8616b9ef76d472"} Nov 22 07:38:12 crc kubenswrapper[4858]: I1122 07:38:12.741609 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwksg" podStartSLOduration=27.052995707 
podStartE2EDuration="1m28.741586265s" podCreationTimestamp="2025-11-22 07:36:44 +0000 UTC" firstStartedPulling="2025-11-22 07:37:10.085251325 +0000 UTC m=+1591.926674331" lastFinishedPulling="2025-11-22 07:38:11.773841883 +0000 UTC m=+1653.615264889" observedRunningTime="2025-11-22 07:38:12.73581959 +0000 UTC m=+1654.577242596" watchObservedRunningTime="2025-11-22 07:38:12.741586265 +0000 UTC m=+1654.583009271" Nov 22 07:38:13 crc kubenswrapper[4858]: I1122 07:38:13.546057 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" path="/var/lib/kubelet/pods/f4af3305-c68f-4034-ac7d-9002c41a711a/volumes" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.195012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.195079 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.237113 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.311888 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.311953 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.311994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.312559 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.312618 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" gracePeriod=600 Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.744420 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" exitCode=0 Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.744735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd"} Nov 22 07:38:15 crc kubenswrapper[4858]: I1122 07:38:15.744803 4858 scope.go:117] "RemoveContainer" containerID="fe678c03c05ee8081bf195d77b88472f1f4c9e342fe01dac378eda1f29d2452e" Nov 22 07:38:16 crc kubenswrapper[4858]: E1122 07:38:16.429632 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:38:16 crc kubenswrapper[4858]: I1122 07:38:16.753375 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:38:16 crc kubenswrapper[4858]: E1122 07:38:16.753647 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.044882 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045749 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="extract-utilities" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045766 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="extract-utilities" Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045787 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="extract-utilities" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045793 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="extract-utilities" Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045803 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="extract-content" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045826 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="extract-content" Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045839 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045845 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045859 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045864 4858 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: E1122 07:38:22.045876 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="extract-content" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.045881 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="extract-content" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.046038 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4af3305-c68f-4034-ac7d-9002c41a711a" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.046052 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c703fd8c-fe17-4db6-a2d5-e790e620acd0" containerName="registry-server" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.047299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.061454 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.110904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.111006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxtj8\" (UniqueName: \"kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.111064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.211949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.212343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.212488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxtj8\" (UniqueName: 
\"kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.212540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.212766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.231764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxtj8\" (UniqueName: \"kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8\") pod \"redhat-marketplace-n6pt7\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.368790 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.719811 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.807725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerStarted","Data":"5ed57f7ced2995ebdfb63a1c560c2e2785c08c16ddf0e31146bf8cd5bfc48beb"} Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.811086 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.813108 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.817926 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.818172 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.818372 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hcn5j" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.818546 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.829511 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.914573 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.922023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.922124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jpx\" (UniqueName: \"kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.922673 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.926295 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 07:38:22 crc kubenswrapper[4858]: I1122 07:38:22.949208 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.023557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.023646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.023703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.023733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jpx\" (UniqueName: \"kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.023789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75d2\" (UniqueName: \"kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.024747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.047157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jpx\" (UniqueName: \"kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx\") pod \"dnsmasq-dns-7bdd77c89-z56qw\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.124995 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75d2\" (UniqueName: \"kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc 
kubenswrapper[4858]: I1122 07:38:23.125091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.125128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.126057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.126273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.145610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75d2\" (UniqueName: \"kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2\") pod \"dnsmasq-dns-6584b49599-tp2jz\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.162410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.241484 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.636125 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.742251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:38:23 crc kubenswrapper[4858]: W1122 07:38:23.747567 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f875b81_0a31_4e91_a45b_f4a6ba519976.slice/crio-6c018254c2d0e3ca2b15ae5cacbd9a7d3e0a7e168c82aa545c56af4e8c4b9650 WatchSource:0}: Error finding container 6c018254c2d0e3ca2b15ae5cacbd9a7d3e0a7e168c82aa545c56af4e8c4b9650: Status 404 returned error can't find the container with id 6c018254c2d0e3ca2b15ae5cacbd9a7d3e0a7e168c82aa545c56af4e8c4b9650 Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.820772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" event={"ID":"4f875b81-0a31-4e91-a45b-f4a6ba519976","Type":"ContainerStarted","Data":"6c018254c2d0e3ca2b15ae5cacbd9a7d3e0a7e168c82aa545c56af4e8c4b9650"} Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.823594 4858 generic.go:334] "Generic (PLEG): container finished" podID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerID="76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a" exitCode=0 Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.823687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerDied","Data":"76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a"} Nov 22 07:38:23 crc kubenswrapper[4858]: I1122 07:38:23.825274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" event={"ID":"352f31b6-c8d5-4178-82b5-f36d2d341431","Type":"ContainerStarted","Data":"ac0900d0fd7b2861b3e0090909e1ae0506d746232cdec648c261d5cb47342837"} Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.247923 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwksg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.727840 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.761800 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.768704 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.784611 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.876591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.876666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fhh\" (UniqueName: \"kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.876716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.977927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.978019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fhh\" (UniqueName: \"kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.978095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.979420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:25 crc kubenswrapper[4858]: I1122 07:38:25.980964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.002810 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fhh\" (UniqueName: 
\"kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh\") pod \"dnsmasq-dns-7c6d9948dc-h4fxg\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.100709 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.115089 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwksg"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.164489 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.201179 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.205173 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.215216 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.282518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.282572 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vq7j\" (UniqueName: \"kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.282592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.385550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.385634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.385658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vq7j\" (UniqueName: \"kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: 
\"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.387599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.388285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.428611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vq7j\" (UniqueName: \"kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j\") pod \"dnsmasq-dns-6486446b9f-97xgk\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.662056 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.844448 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.844750 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqsnt" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="registry-server" containerID="cri-o://efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49" gracePeriod=2 Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.887103 4858 generic.go:334] "Generic (PLEG): container finished" podID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerID="698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a" exitCode=0 Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.887430 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerDied","Data":"698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a"} Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.941053 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.943459 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.949407 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.949486 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.950273 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.951898 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.952176 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.952360 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wn2gl" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.952519 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 07:38:26 crc kubenswrapper[4858]: I1122 07:38:26.989607 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108550 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bx7\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.108688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.124705 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:38:27 crc kubenswrapper[4858]: W1122 07:38:27.140995 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f29dbfc_f3aa_465d_959a_5d1ed2daf1bf.slice/crio-9d110f19a70aff77148db4e88aba3029d8a10e4d0649ad6ae538103deb01dace WatchSource:0}: Error finding container 9d110f19a70aff77148db4e88aba3029d8a10e4d0649ad6ae538103deb01dace: Status 404 returned error can't find the container with id 9d110f19a70aff77148db4e88aba3029d8a10e4d0649ad6ae538103deb01dace Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.209928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.209998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 
07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210130 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2bx7\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.210355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.211720 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.211926 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.214410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.214510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.214955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.215228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.219992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.236986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.237051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.244706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.246109 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.247713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2bx7\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7\") pod \"rabbitmq-server-0\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.295362 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.309235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.319179 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.321638 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.331106 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.332147 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.333001 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.333237 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.333372 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2ptzn" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.333529 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.333752 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.334747 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.416820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.416890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.416923 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.416959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.416992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwfz\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.417200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.523591 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.526971 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.527057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwfz\" (UniqueName: 
\"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.527137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.527175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.527227 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.527576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.528224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.529388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.529868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.533347 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.533384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.535166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.537009 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:38:27 crc kubenswrapper[4858]: E1122 07:38:27.537229 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.547980 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.563728 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.589551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwfz\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.628373 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnnwk\" (UniqueName: \"kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk\") pod \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.628637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content\") pod \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.628699 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities\") pod \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\" (UID: \"2ed98ca3-48f8-4737-8954-fac4bea34ad1\") " Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.631189 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities" (OuterVolumeSpecName: "utilities") pod "2ed98ca3-48f8-4737-8954-fac4bea34ad1" (UID: "2ed98ca3-48f8-4737-8954-fac4bea34ad1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.634512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.681040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk" (OuterVolumeSpecName: "kube-api-access-qnnwk") pod "2ed98ca3-48f8-4737-8954-fac4bea34ad1" (UID: "2ed98ca3-48f8-4737-8954-fac4bea34ad1"). InnerVolumeSpecName "kube-api-access-qnnwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.693029 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.731128 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.731466 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnnwk\" (UniqueName: \"kubernetes.io/projected/2ed98ca3-48f8-4737-8954-fac4bea34ad1-kube-api-access-qnnwk\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.764651 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed98ca3-48f8-4737-8954-fac4bea34ad1" (UID: "2ed98ca3-48f8-4737-8954-fac4bea34ad1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.833939 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed98ca3-48f8-4737-8954-fac4bea34ad1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.908456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" event={"ID":"65c972e4-1af3-48ac-af6c-2e65080ed8b5","Type":"ContainerStarted","Data":"86e99f913047b527304e9ed7a33dc9598e04c2101aee6eda483b82154f30b9e9"} Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.912620 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerID="efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49" exitCode=0 Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.912728 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqsnt" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.912768 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerDied","Data":"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49"} Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.912854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqsnt" event={"ID":"2ed98ca3-48f8-4737-8954-fac4bea34ad1","Type":"ContainerDied","Data":"3c6b6ff745e19a83036f437b3df27497ae291294f3fa432686eba6722d9ce8d8"} Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.912880 4858 scope.go:117] "RemoveContainer" containerID="efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49" Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.917095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" event={"ID":"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf","Type":"ContainerStarted","Data":"9d110f19a70aff77148db4e88aba3029d8a10e4d0649ad6ae538103deb01dace"} Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.954670 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.978917 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cqsnt"] Nov 22 07:38:27 crc kubenswrapper[4858]: I1122 07:38:27.991987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.076423 4858 scope.go:117] "RemoveContainer" containerID="841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.268534 4858 scope.go:117] "RemoveContainer" containerID="e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.326974 4858 scope.go:117] "RemoveContainer" containerID="efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49" Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.329068 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49\": container with ID starting with efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49 not found: ID does not exist" containerID="efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.329160 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49"} err="failed to get container status \"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49\": rpc error: code = NotFound desc = could not find container \"efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49\": container with ID starting with efad9515a25d8c7c56190a99d9d2ca52f9b0b7ecd93ee1beb04241a4bc4bcd49 not found: ID does not exist" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.329234 4858 scope.go:117] "RemoveContainer" containerID="841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835" Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.330016 
4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835\": container with ID starting with 841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835 not found: ID does not exist" containerID="841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.330047 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835"} err="failed to get container status \"841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835\": rpc error: code = NotFound desc = could not find container \"841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835\": container with ID starting with 841fd22d653d165a10bc0fa726524f52ed58d4d583098cd6354c9e5d45334835 not found: ID does not exist" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.330124 4858 scope.go:117] "RemoveContainer" containerID="e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad" Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.331311 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad\": container with ID starting with e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad not found: ID does not exist" containerID="e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.331376 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad"} err="failed to get container status \"e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad\": rpc error: code = NotFound desc = could not find container \"e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad\": container with ID starting with e7e20050400742317a1676acb12a28bcd76010bb6c71a2cdf53ac692307d03ad not found: ID does not exist" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.617177 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.618043 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="extract-utilities" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.618063 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="extract-utilities" Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.618085 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="extract-content" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.618093 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="extract-content" Nov 22 07:38:28 crc kubenswrapper[4858]: E1122 07:38:28.618113 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="registry-server" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.618122 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" 
containerName="registry-server" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.618491 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" containerName="registry-server" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.621727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.631228 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.631827 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.633007 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zfz58" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.633263 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.651662 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.656149 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751735 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnlz\" (UniqueName: \"kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751897 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.751986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.752023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.811142 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:38:28 crc kubenswrapper[4858]: W1122 07:38:28.821385 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a92d321_46e4_4291_8ac3_fc8f039b3dcf.slice/crio-3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91 WatchSource:0}: Error finding container 3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91: Status 404 returned error can't find the container with id 3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91 Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqnlz\" (UniqueName: \"kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc 
kubenswrapper[4858]: I1122 07:38:28.855305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.855373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.856993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.857200 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.859116 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.861010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.864148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.865083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.883643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.892526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnlz\" (UniqueName: \"kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.923177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.946359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerStarted","Data":"3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91"} Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.956924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerStarted","Data":"6cfb57607d2c3f225692b0d2f9d43db8bd774cf8c6c30d64695e74df969988a4"} Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.966914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:38:28 crc kubenswrapper[4858]: I1122 07:38:28.977307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerStarted","Data":"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3"} Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.560942 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed98ca3-48f8-4737-8954-fac4bea34ad1" path="/var/lib/kubelet/pods/2ed98ca3-48f8-4737-8954-fac4bea34ad1/volumes" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.613769 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6pt7" podStartSLOduration=3.255976163 podStartE2EDuration="7.613749151s" podCreationTimestamp="2025-11-22 07:38:22 +0000 UTC" firstStartedPulling="2025-11-22 07:38:23.825392079 +0000 UTC m=+1665.666815085" lastFinishedPulling="2025-11-22 07:38:28.183165067 +0000 UTC m=+1670.024588073" observedRunningTime="2025-11-22 07:38:29.029408765 +0000 UTC m=+1670.870831801" watchObservedRunningTime="2025-11-22 07:38:29.613749151 +0000 UTC m=+1671.455172157" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.715988 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.932440 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.934401 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.939513 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.939610 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jc29p" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.940683 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.946734 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.955914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.990798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.990856 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjmm\" (UniqueName: \"kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.990929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.991046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.991110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.991229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.991371 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:29 crc kubenswrapper[4858]: I1122 07:38:29.991494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.028682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerStarted","Data":"a7a8a85beea11de66210f7a2b3bd0111d85ac468dafbb4ef22ffe263e58d9928"} Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fjmm\" (UniqueName: \"kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " 
pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.095724 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.097074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.100631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.100902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.101125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.117433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.193164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fjmm\" (UniqueName: \"kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.193204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.257355 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.284597 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.508389 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.509786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.534912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.535163 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mmj4k" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.535390 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.545940 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.667431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.667880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.668107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.668137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.668243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg75d\" (UniqueName: \"kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.769835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.769912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.769970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg75d\" (UniqueName: \"kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.770025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.770055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.771133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.772118 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.782110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.782709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.849597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg75d\" (UniqueName: \"kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d\") pod \"memcached-0\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " pod="openstack/memcached-0" Nov 22 07:38:30 crc kubenswrapper[4858]: I1122 07:38:30.858296 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 22 07:38:31 crc kubenswrapper[4858]: I1122 07:38:31.809726 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:38:31 crc kubenswrapper[4858]: I1122 07:38:31.917460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:38:32 crc kubenswrapper[4858]: W1122 07:38:32.009415 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92662c9_980a_41b0_ad01_bbb1cdaf864b.slice/crio-d46dbb82e39d7ea4d4c2a65b8c2afa9942910ab0e80bfa4c74ac1e9439907fba WatchSource:0}: Error finding container d46dbb82e39d7ea4d4c2a65b8c2afa9942910ab0e80bfa4c74ac1e9439907fba: Status 404 returned error can't find the container with id d46dbb82e39d7ea4d4c2a65b8c2afa9942910ab0e80bfa4c74ac1e9439907fba Nov 22 07:38:32 crc kubenswrapper[4858]: W1122 07:38:32.020723 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9906e22d_4a3b_4ab7_86b7_2944b6af0f34.slice/crio-048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4 WatchSource:0}: Error finding container 048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4: Status 404 returned error can't find the container with id 048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4 Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.172831 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9906e22d-4a3b-4ab7-86b7-2944b6af0f34","Type":"ContainerStarted","Data":"048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4"} Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.177884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerStarted","Data":"d46dbb82e39d7ea4d4c2a65b8c2afa9942910ab0e80bfa4c74ac1e9439907fba"} Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.375542 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.376800 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.789741 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.791386 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.802510 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-klb2h" Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.818756 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.882843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:32 crc kubenswrapper[4858]: I1122 07:38:32.942719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhz6q\" (UniqueName: \"kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q\") pod \"kube-state-metrics-0\" (UID: \"0e1c8353-0669-44c9-840f-d1e30d3b51eb\") " pod="openstack/kube-state-metrics-0" Nov 22 07:38:33 crc kubenswrapper[4858]: I1122 07:38:33.048341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz6q\" (UniqueName: \"kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q\") pod \"kube-state-metrics-0\" (UID: \"0e1c8353-0669-44c9-840f-d1e30d3b51eb\") " pod="openstack/kube-state-metrics-0" Nov 22 07:38:33 crc kubenswrapper[4858]: I1122 07:38:33.089933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz6q\" (UniqueName: \"kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q\") pod \"kube-state-metrics-0\" (UID: \"0e1c8353-0669-44c9-840f-d1e30d3b51eb\") " pod="openstack/kube-state-metrics-0" Nov 22 07:38:33 crc kubenswrapper[4858]: I1122 07:38:33.118496 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:38:33 crc kubenswrapper[4858]: I1122 07:38:33.384419 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:34 crc kubenswrapper[4858]: I1122 07:38:34.872574 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.332725 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.948599 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.949956 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.955113 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.955425 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.955558 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.955664 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wbv9g" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.959468 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 07:38:35 crc kubenswrapper[4858]: I1122 07:38:35.976202 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9crv\" (UniqueName: \"kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.071895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9crv\" (UniqueName: \"kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.178977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: 
I1122 07:38:36.180222 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.181943 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.182443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.188311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.191530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.205097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.206825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.211774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9crv\" (UniqueName: \"kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.214294 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.280654 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.328387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e1c8353-0669-44c9-840f-d1e30d3b51eb","Type":"ContainerStarted","Data":"d224b32ee12a3e577f7e4c0e2beda356aa76615e226a3fda5ec3893b43bc1c99"} Nov 22 07:38:36 crc kubenswrapper[4858]: I1122 07:38:36.328670 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6pt7" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="registry-server" containerID="cri-o://ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3" gracePeriod=2 Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.304230 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.361935 4858 generic.go:334] "Generic (PLEG): container finished" podID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerID="ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3" exitCode=0 Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.361993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerDied","Data":"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3"} Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.362027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6pt7" event={"ID":"a0640fee-cce7-4095-b941-fefa6fd90c76","Type":"ContainerDied","Data":"5ed57f7ced2995ebdfb63a1c560c2e2785c08c16ddf0e31146bf8cd5bfc48beb"} Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.362044 4858 scope.go:117] "RemoveContainer" containerID="ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.362229 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6pt7" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.422934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content\") pod \"a0640fee-cce7-4095-b941-fefa6fd90c76\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.423069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxtj8\" (UniqueName: \"kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8\") pod \"a0640fee-cce7-4095-b941-fefa6fd90c76\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.423205 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities\") pod \"a0640fee-cce7-4095-b941-fefa6fd90c76\" (UID: \"a0640fee-cce7-4095-b941-fefa6fd90c76\") " Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.425027 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities" (OuterVolumeSpecName: "utilities") pod "a0640fee-cce7-4095-b941-fefa6fd90c76" (UID: "a0640fee-cce7-4095-b941-fefa6fd90c76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.442406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8" (OuterVolumeSpecName: "kube-api-access-xxtj8") pod "a0640fee-cce7-4095-b941-fefa6fd90c76" (UID: "a0640fee-cce7-4095-b941-fefa6fd90c76"). InnerVolumeSpecName "kube-api-access-xxtj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.454253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.456137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0640fee-cce7-4095-b941-fefa6fd90c76" (UID: "a0640fee-cce7-4095-b941-fefa6fd90c76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.476807 4858 scope.go:117] "RemoveContainer" containerID="698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a" Nov 22 07:38:37 crc kubenswrapper[4858]: W1122 07:38:37.494612 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fe3fb0_c1b4_4ca6_9d41_ca400a479fe6.slice/crio-fe2580fb65f0703d75f266fef0d8976f9549d16faec0950678a5220120085269 WatchSource:0}: Error finding container fe2580fb65f0703d75f266fef0d8976f9549d16faec0950678a5220120085269: Status 404 returned error can't find the container with id fe2580fb65f0703d75f266fef0d8976f9549d16faec0950678a5220120085269 Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.526142 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.526226 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxtj8\" (UniqueName: \"kubernetes.io/projected/a0640fee-cce7-4095-b941-fefa6fd90c76-kube-api-access-xxtj8\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.526279 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0640fee-cce7-4095-b941-fefa6fd90c76-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.527490 4858 scope.go:117] "RemoveContainer" containerID="76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.649005 4858 scope.go:117] "RemoveContainer" containerID="ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3" Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.650121 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3\": container with ID starting with ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3 not found: ID does not exist" containerID="ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.650233 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3"} err="failed to get container status \"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3\": rpc error: code = NotFound desc = could not find container \"ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3\": container with ID starting with ae09923fa512c243c3d96a5858d90fb8a4237269f79da82058e70ed714a8b5d3 not found: ID does not exist" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.650382 4858 scope.go:117] "RemoveContainer" containerID="698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a" Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.651560 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a\": container with ID starting with 698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a not found: ID 
does not exist" containerID="698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.651610 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a"} err="failed to get container status \"698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a\": rpc error: code = NotFound desc = could not find container \"698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a\": container with ID starting with 698269c67a4a308b4b7bdb397d520c03e1b8e3621bf10edb3f186faba333559a not found: ID does not exist" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.651646 4858 scope.go:117] "RemoveContainer" containerID="76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a" Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.652427 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a\": container with ID starting with 76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a not found: ID does not exist" containerID="76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.652464 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a"} err="failed to get container status \"76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a\": rpc error: code = NotFound desc = could not find container \"76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a\": container with ID starting with 76e7513b5c21c37fa973ce6390f66be5585a8b9c4816a91810141bc820b8a93a not found: ID does not exist" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.694588 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.710128 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6pt7"] Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.993037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.994006 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="extract-utilities" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.994040 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="extract-utilities" Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.994074 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="extract-content" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.994088 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="extract-content" Nov 22 07:38:37 crc kubenswrapper[4858]: E1122 07:38:37.994144 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="registry-server" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.994160 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="registry-server" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.994559 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" containerName="registry-server" Nov 22 07:38:37 crc kubenswrapper[4858]: I1122 07:38:37.995553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.001178 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.001521 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x4gwl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.004281 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.043397 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.046042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.090893 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.097671 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.106812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.117529 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.119188 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.122300 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.129868 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmwr\" (UniqueName: \"kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4955v\" (UniqueName: \"kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.140957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.141116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " 
pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.142138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.142252 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.142292 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.142410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.142614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kt4\" (UniqueName: \"kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249575 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmwr\" (UniqueName: \"kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4955v\" (UniqueName: 
\"kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.249946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.250174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.250373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.250490 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.251307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.256311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.256420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.257479 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.257821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.257952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.266033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.273883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmwr\" (UniqueName: \"kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.285759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4955v\" (UniqueName: \"kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v\") pod \"ovn-controller-ovs-xbvdl\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.287357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle\") pod \"ovn-controller-rm92c\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.349663 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rm92c" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.351185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.351261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.351356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kt4\" (UniqueName: \"kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.351395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.352961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.353577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.354152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.376160 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kt4\" (UniqueName: \"kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4\") pod \"dnsmasq-dns-6c65c5f57f-vhh69\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.384781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.459586 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:38:38 crc kubenswrapper[4858]: I1122 07:38:38.463222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerStarted","Data":"fe2580fb65f0703d75f266fef0d8976f9549d16faec0950678a5220120085269"} Nov 22 07:38:39 crc kubenswrapper[4858]: I1122 07:38:39.563252 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0640fee-cce7-4095-b941-fefa6fd90c76" path="/var/lib/kubelet/pods/a0640fee-cce7-4095-b941-fefa6fd90c76/volumes" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.085704 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.087345 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.091615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cprws" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.091630 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.091651 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.091878 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.098705 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211590 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9hk\" (UniqueName: 
\"kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.211991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.212040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.313997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9hk\" (UniqueName: \"kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314257 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.314305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.317125 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.321863 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.322288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.324563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.325310 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.343597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.350516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9hk\" (UniqueName: \"kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.353903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " 
pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.354295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:40 crc kubenswrapper[4858]: I1122 07:38:40.427781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:38:41 crc kubenswrapper[4858]: I1122 07:38:41.536601 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:38:41 crc kubenswrapper[4858]: E1122 07:38:41.537721 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:38:52 crc kubenswrapper[4858]: I1122 07:38:52.536934 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:38:52 crc kubenswrapper[4858]: E1122 07:38:52.537708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:39:03 crc kubenswrapper[4858]: I1122 07:39:03.536070 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:39:03 crc kubenswrapper[4858]: E1122 07:39:03.537003 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:39:04 crc kubenswrapper[4858]: E1122 07:39:04.851438 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-memcached/blobs/sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a\": context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2" Nov 22 07:39:04 crc kubenswrapper[4858]: E1122 07:39:04.852133 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n55h558h7ch648h58h554hbdh675h65h567h694h575h5cch5f4hd5h67h57fh5ddh7bh669h65fh5d5hf5h5f4h5b9h595hd5h9h68bh698h696h5fcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg75d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9906e22d-4a3b-4ab7-86b7-2944b6af0f34): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-memcached/blobs/sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a\": context canceled" logger="UnhandledError" Nov 22 07:39:04 crc kubenswrapper[4858]: E1122 07:39:04.853358 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc 
error: code = Canceled desc = reading blob sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-memcached/blobs/sha256:41eb7c54c0a4a4afdc79659db2c38ffe7418be430e159b33eadc2eae6758868a\\\": context canceled\"" pod="openstack/memcached-0" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" Nov 22 07:39:05 crc kubenswrapper[4858]: E1122 07:39:05.715910 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:36a0fb31978aee0ded2483de311631e64a644d0b0685b5b055f65ede7eb8e8a2\\\"\"" pod="openstack/memcached-0" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" Nov 22 07:39:06 crc kubenswrapper[4858]: E1122 07:39:06.015610 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b" Nov 22 07:39:06 crc kubenswrapper[4858]: E1122 07:39:06.016040 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvwfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(2a92d321-46e4-4291-8ac3-fc8f039b3dcf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:06 crc kubenswrapper[4858]: E1122 07:39:06.017344 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" Nov 22 07:39:06 crc kubenswrapper[4858]: E1122 07:39:06.724459 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" Nov 22 07:39:14 crc kubenswrapper[4858]: E1122 07:39:14.644981 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b" Nov 22 07:39:14 crc kubenswrapper[4858]: E1122 07:39:14.645685 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp 
/tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2bx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(ddb1a203-c5d9-4ba5-b31b-c6134963af46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:14 crc kubenswrapper[4858]: E1122 07:39:14.647297 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" Nov 22 07:39:14 crc kubenswrapper[4858]: E1122 07:39:14.783931 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b\\\"\"" pod="openstack/rabbitmq-server-0" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.935686 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.935914 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fjmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d92662c9-980a-41b0-ad01-bbb1cdaf864b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.937304 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.955641 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.955862 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqnlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(4ec286aa-6594-4e36-b307-c8ffaa0e59de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:15 crc kubenswrapper[4858]: E1122 07:39:15.957030 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" Nov 22 07:39:16 crc kubenswrapper[4858]: I1122 07:39:16.726541 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:39:16 crc kubenswrapper[4858]: E1122 07:39:16.800978 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" Nov 22 07:39:16 crc kubenswrapper[4858]: E1122 07:39:16.801042 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce\\\"\"" pod="openstack/openstack-galera-0" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" Nov 22 07:39:17 crc kubenswrapper[4858]: I1122 
07:39:17.538443 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:39:17 crc kubenswrapper[4858]: E1122 07:39:17.540365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:39:17 crc kubenswrapper[4858]: I1122 07:39:17.710645 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.406183 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.406773 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5jpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-z56qw_openstack(352f31b6-c8d5-4178-82b5-f36d2d341431): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.407995 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" podUID="352f31b6-c8d5-4178-82b5-f36d2d341431" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.445754 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.446020 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vq7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-97xgk_openstack(65c972e4-1af3-48ac-af6c-2e65080ed8b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.447313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" podUID="65c972e4-1af3-48ac-af6c-2e65080ed8b5" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.466389 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.466617 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r75d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-tp2jz_openstack(4f875b81-0a31-4e91-a45b-f4a6ba519976): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.470449 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" podUID="4f875b81-0a31-4e91-a45b-f4a6ba519976" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.507571 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.507793 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8fhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c6d9948dc-h4fxg_openstack(0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.510501 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" podUID="0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf" Nov 22 07:39:18 crc kubenswrapper[4858]: W1122 07:39:18.704097 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4636a7e4_bda9_4b76_91ab_87ed6e121b50.slice/crio-76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595 WatchSource:0}: Error finding container 76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595: Status 404 returned error can't find the container with id 76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595 Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.709015 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf" Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.709225 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbhbfh647h59fh654h5d7h674h585h65chd6h59dhfch7fh689h596h55fh68fh54bh89h57bh5dfh6fh5b4h8hf9h5c5h546h9bh5ffh546hf7h7dq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9crv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:18 crc kubenswrapper[4858]: I1122 07:39:18.794617 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:39:18 crc kubenswrapper[4858]: I1122 07:39:18.819783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerStarted","Data":"9f93c99ee45d764031d887fe389f340c77d22607f36c1565052b6e3f993c17f6"} Nov 22 07:39:18 crc kubenswrapper[4858]: I1122 07:39:18.828113 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c" event={"ID":"4636a7e4-bda9-4b76-91ab-87ed6e121b50","Type":"ContainerStarted","Data":"76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595"} Nov 22 07:39:18 crc kubenswrapper[4858]: E1122 07:39:18.833444 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" podUID="65c972e4-1af3-48ac-af6c-2e65080ed8b5" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.290974 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.585900 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.644055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5jpx\" (UniqueName: \"kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx\") pod \"352f31b6-c8d5-4178-82b5-f36d2d341431\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.644279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config\") pod \"352f31b6-c8d5-4178-82b5-f36d2d341431\" (UID: \"352f31b6-c8d5-4178-82b5-f36d2d341431\") " Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.645776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config" (OuterVolumeSpecName: "config") pod "352f31b6-c8d5-4178-82b5-f36d2d341431" (UID: "352f31b6-c8d5-4178-82b5-f36d2d341431"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.651453 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx" (OuterVolumeSpecName: "kube-api-access-n5jpx") pod "352f31b6-c8d5-4178-82b5-f36d2d341431" (UID: "352f31b6-c8d5-4178-82b5-f36d2d341431"). InnerVolumeSpecName "kube-api-access-n5jpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.747290 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5jpx\" (UniqueName: \"kubernetes.io/projected/352f31b6-c8d5-4178-82b5-f36d2d341431-kube-api-access-n5jpx\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.747358 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/352f31b6-c8d5-4178-82b5-f36d2d341431-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.838007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" event={"ID":"352f31b6-c8d5-4178-82b5-f36d2d341431","Type":"ContainerDied","Data":"ac0900d0fd7b2861b3e0090909e1ae0506d746232cdec648c261d5cb47342837"} Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.838119 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-z56qw" Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.915186 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:39:19 crc kubenswrapper[4858]: I1122 07:39:19.924079 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-z56qw"] Nov 22 07:39:20 crc kubenswrapper[4858]: W1122 07:39:20.141254 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce63b917_9d99_4ff8_936a_b2e3dff67a94.slice/crio-194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8 WatchSource:0}: Error finding container 194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8: Status 404 returned error can't find the container with id 194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8 Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.219538 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.225293 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268045 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config\") pod \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config\") pod \"4f875b81-0a31-4e91-a45b-f4a6ba519976\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc\") pod \"4f875b81-0a31-4e91-a45b-f4a6ba519976\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8fhh\" (UniqueName: \"kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh\") pod \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r75d2\" (UniqueName: \"kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2\") pod \"4f875b81-0a31-4e91-a45b-f4a6ba519976\" (UID: \"4f875b81-0a31-4e91-a45b-f4a6ba519976\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.268385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc\") pod \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\" (UID: \"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf\") " Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.269335 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config" (OuterVolumeSpecName: "config") pod "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf" (UID: "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.270140 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config" (OuterVolumeSpecName: "config") pod "4f875b81-0a31-4e91-a45b-f4a6ba519976" (UID: "4f875b81-0a31-4e91-a45b-f4a6ba519976"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.270464 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf" (UID: "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.270986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f875b81-0a31-4e91-a45b-f4a6ba519976" (UID: "4f875b81-0a31-4e91-a45b-f4a6ba519976"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.274650 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2" (OuterVolumeSpecName: "kube-api-access-r75d2") pod "4f875b81-0a31-4e91-a45b-f4a6ba519976" (UID: "4f875b81-0a31-4e91-a45b-f4a6ba519976"). InnerVolumeSpecName "kube-api-access-r75d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.274744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh" (OuterVolumeSpecName: "kube-api-access-t8fhh") pod "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf" (UID: "0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf"). InnerVolumeSpecName "kube-api-access-t8fhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370504 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r75d2\" (UniqueName: \"kubernetes.io/projected/4f875b81-0a31-4e91-a45b-f4a6ba519976-kube-api-access-r75d2\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370555 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370575 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370595 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370608 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f875b81-0a31-4e91-a45b-f4a6ba519976-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.370619 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8fhh\" (UniqueName: \"kubernetes.io/projected/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf-kube-api-access-t8fhh\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.850193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" event={"ID":"ce63b917-9d99-4ff8-936a-b2e3dff67a94","Type":"ContainerStarted","Data":"194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8"} Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.851990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerStarted","Data":"60b203ac6d91e2230b67544e2eb62f4dbef70088c9d081f82150040a4d797776"} Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.854231 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.854211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-tp2jz" event={"ID":"4f875b81-0a31-4e91-a45b-f4a6ba519976","Type":"ContainerDied","Data":"6c018254c2d0e3ca2b15ae5cacbd9a7d3e0a7e168c82aa545c56af4e8c4b9650"} Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.856409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" event={"ID":"0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf","Type":"ContainerDied","Data":"9d110f19a70aff77148db4e88aba3029d8a10e4d0649ad6ae538103deb01dace"} Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.856496 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-h4fxg" Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.922787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.929705 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-tp2jz"] Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.986890 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:39:20 crc kubenswrapper[4858]: I1122 07:39:20.994484 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-h4fxg"] Nov 22 07:39:21 crc kubenswrapper[4858]: I1122 07:39:21.548388 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf" path="/var/lib/kubelet/pods/0f29dbfc-f3aa-465d-959a-5d1ed2daf1bf/volumes" Nov 22 07:39:21 crc kubenswrapper[4858]: I1122 07:39:21.548895 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352f31b6-c8d5-4178-82b5-f36d2d341431" path="/var/lib/kubelet/pods/352f31b6-c8d5-4178-82b5-f36d2d341431/volumes" Nov 22 07:39:21 crc kubenswrapper[4858]: I1122 07:39:21.549392 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f875b81-0a31-4e91-a45b-f4a6ba519976" path="/var/lib/kubelet/pods/4f875b81-0a31-4e91-a45b-f4a6ba519976/volumes" Nov 22 07:39:26 crc kubenswrapper[4858]: E1122 07:39:26.688502 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.909536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerStarted","Data":"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.911827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e1c8353-0669-44c9-840f-d1e30d3b51eb","Type":"ContainerStarted","Data":"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.911956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.914799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerStarted","Data":"3ad357e4d1993d5844d93e34adf01294d9318108dd28553ecd2102cef61ce78e"} Nov 22 07:39:26 crc kubenswrapper[4858]: E1122 07:39:26.917529 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.917688 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" 
containerID="1ec87502d31cc41b8659517a5a9a1782871638ffd5e0d536e73cd11e489d72d2" exitCode=0 Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.917812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" event={"ID":"ce63b917-9d99-4ff8-936a-b2e3dff67a94","Type":"ContainerDied","Data":"1ec87502d31cc41b8659517a5a9a1782871638ffd5e0d536e73cd11e489d72d2"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.922042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerStarted","Data":"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.925888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c" event={"ID":"4636a7e4-bda9-4b76-91ab-87ed6e121b50","Type":"ContainerStarted","Data":"090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.926841 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rm92c" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.930807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9906e22d-4a3b-4ab7-86b7-2944b6af0f34","Type":"ContainerStarted","Data":"f52562da73839518f25e57d06af939791fd8a1949a98847efb6f708599667a5d"} Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.931235 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.938194 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.265882672 podStartE2EDuration="54.93816925s" podCreationTimestamp="2025-11-22 07:38:32 +0000 UTC" firstStartedPulling="2025-11-22 07:38:35.362517013 +0000 UTC m=+1677.203940019" lastFinishedPulling="2025-11-22 07:39:26.034803591 +0000 UTC m=+1727.876226597" observedRunningTime="2025-11-22 07:39:26.933666166 +0000 UTC m=+1728.775089182" watchObservedRunningTime="2025-11-22 07:39:26.93816925 +0000 UTC m=+1728.779592256" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.967398 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rm92c" podStartSLOduration=43.042068952 podStartE2EDuration="49.967372016s" podCreationTimestamp="2025-11-22 07:38:37 +0000 UTC" firstStartedPulling="2025-11-22 07:39:18.7148292 +0000 UTC m=+1720.556252206" lastFinishedPulling="2025-11-22 07:39:25.640132264 +0000 UTC m=+1727.481555270" observedRunningTime="2025-11-22 07:39:26.956801568 +0000 UTC m=+1728.798224584" watchObservedRunningTime="2025-11-22 07:39:26.967372016 +0000 UTC m=+1728.808795032" Nov 22 07:39:26 crc kubenswrapper[4858]: I1122 07:39:26.982874 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.035894014 podStartE2EDuration="56.982852502s" podCreationTimestamp="2025-11-22 07:38:30 +0000 UTC" firstStartedPulling="2025-11-22 07:38:32.049830516 +0000 UTC m=+1673.891253522" lastFinishedPulling="2025-11-22 07:39:25.996789004 +0000 UTC m=+1727.838212010" observedRunningTime="2025-11-22 07:39:26.980894439 +0000 UTC m=+1728.822317445" watchObservedRunningTime="2025-11-22 07:39:26.982852502 +0000 UTC m=+1728.824275508" Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.944168 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" event={"ID":"ce63b917-9d99-4ff8-936a-b2e3dff67a94","Type":"ContainerStarted","Data":"e55158c45050927635aa9db9ab484583b8f882f7f462bbec9bbaa9eb32add1c5"} Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.944610 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.946524 4858 generic.go:334] "Generic (PLEG): container finished" podID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerID="feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334" exitCode=0 Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.946582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerDied","Data":"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334"} Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.952009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerStarted","Data":"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a"} Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.954929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerStarted","Data":"01bded6dc21a4fd246c2c6f00a02bab06b43ba88276bd0abc3233f17785ed65c"} Nov 22 07:39:27 crc kubenswrapper[4858]: E1122 07:39:27.957425 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:fc9c99eeef91523482bd8f92661b393287e1f2a24ad2ba9e33191f8de9af74cf\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" Nov 22 07:39:27 crc kubenswrapper[4858]: I1122 07:39:27.975579 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podStartSLOduration=45.111755057 podStartE2EDuration="50.975550474s" podCreationTimestamp="2025-11-22 07:38:37 +0000 UTC" firstStartedPulling="2025-11-22 07:39:20.142742788 +0000 UTC m=+1721.984165794" lastFinishedPulling="2025-11-22 07:39:26.006538195 +0000 UTC m=+1727.847961211" observedRunningTime="2025-11-22 07:39:27.970364218 +0000 UTC m=+1729.811787224" watchObservedRunningTime="2025-11-22 07:39:27.975550474 +0000 UTC m=+1729.816973480" Nov 22 07:39:28 crc kubenswrapper[4858]: I1122 07:39:28.012225 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=45.76688072 podStartE2EDuration="49.012195958s" podCreationTimestamp="2025-11-22 07:38:39 +0000 UTC" firstStartedPulling="2025-11-22 07:39:18.349253504 +0000 UTC m=+1720.190676520" lastFinishedPulling="2025-11-22 07:39:21.594568752 +0000 UTC m=+1723.435991758" observedRunningTime="2025-11-22 07:39:28.005571885 +0000 UTC m=+1729.846994891" watchObservedRunningTime="2025-11-22 07:39:28.012195958 +0000 UTC m=+1729.853618964" Nov 22 07:39:28 crc kubenswrapper[4858]: I1122 07:39:28.428672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:28 crc kubenswrapper[4858]: I1122 07:39:28.965830 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerStarted","Data":"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581"} Nov 22 07:39:29 crc kubenswrapper[4858]: I1122 07:39:29.981218 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerStarted","Data":"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105"} Nov 22 07:39:29 crc kubenswrapper[4858]: I1122 07:39:29.981560 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:39:29 crc kubenswrapper[4858]: I1122 07:39:29.981587 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.006791 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xbvdl" podStartSLOduration=47.16676587 podStartE2EDuration="53.006766065s" podCreationTimestamp="2025-11-22 07:38:37 +0000 UTC" firstStartedPulling="2025-11-22 07:39:20.156129327 +0000 UTC m=+1721.997552333" lastFinishedPulling="2025-11-22 07:39:25.996129522 +0000 UTC m=+1727.837552528" observedRunningTime="2025-11-22 07:39:30.0035193 +0000 UTC m=+1731.844942326" watchObservedRunningTime="2025-11-22 07:39:30.006766065 +0000 UTC m=+1731.848189071" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.428398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.563364 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.565922 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.570886 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.633368 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.657945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmwc\" (UniqueName: \"kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.658150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.658281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.658418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.658480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.658519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: 
\"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.761629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmwc\" (UniqueName: \"kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.762408 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.762610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.763148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.776954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.780897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " 
pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.786560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmwc\" (UniqueName: \"kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc\") pod \"ovn-controller-metrics-fpwcs\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.847643 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.888519 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.891053 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.893929 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.926271 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.966509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.966622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.966666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x822c\" (UniqueName: \"kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.966694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.966733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.980482 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:39:30 crc kubenswrapper[4858]: I1122 07:39:30.991676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerStarted","Data":"749e7f842b25d66763df85de32ae258db50242c10ba859d9ba25bc43fedc493f"} Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.072334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.072478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.072546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x822c\" (UniqueName: \"kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.072606 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.072769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.073884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.074581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.074849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.075013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.102540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x822c\" (UniqueName: \"kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c\") pod \"dnsmasq-dns-5c476d78c5-xcn8s\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.219517 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.223526 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.276003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config\") pod \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.276162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc\") pod \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.276251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vq7j\" (UniqueName: \"kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j\") pod \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\" (UID: \"65c972e4-1af3-48ac-af6c-2e65080ed8b5\") " Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.276726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config" (OuterVolumeSpecName: "config") pod "65c972e4-1af3-48ac-af6c-2e65080ed8b5" (UID: "65c972e4-1af3-48ac-af6c-2e65080ed8b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.277182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65c972e4-1af3-48ac-af6c-2e65080ed8b5" (UID: "65c972e4-1af3-48ac-af6c-2e65080ed8b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.371609 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.378694 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.378759 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65c972e4-1af3-48ac-af6c-2e65080ed8b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.454000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j" (OuterVolumeSpecName: "kube-api-access-7vq7j") pod "65c972e4-1af3-48ac-af6c-2e65080ed8b5" (UID: "65c972e4-1af3-48ac-af6c-2e65080ed8b5"). InnerVolumeSpecName "kube-api-access-7vq7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.479898 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vq7j\" (UniqueName: \"kubernetes.io/projected/65c972e4-1af3-48ac-af6c-2e65080ed8b5-kube-api-access-7vq7j\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.500590 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:31 crc kubenswrapper[4858]: I1122 07:39:31.846393 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:39:31 crc kubenswrapper[4858]: W1122 07:39:31.861023 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd868e587_35fd_4fde_9db6_19f7dfa055e3.slice/crio-77a454aad598682c434ab0deffdd40914c1b7ce85e2ea36024c043912bd00807 WatchSource:0}: Error finding container 77a454aad598682c434ab0deffdd40914c1b7ce85e2ea36024c043912bd00807: Status 404 returned error can't find the container with id 77a454aad598682c434ab0deffdd40914c1b7ce85e2ea36024c043912bd00807 Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.009367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" event={"ID":"65c972e4-1af3-48ac-af6c-2e65080ed8b5","Type":"ContainerDied","Data":"86e99f913047b527304e9ed7a33dc9598e04c2101aee6eda483b82154f30b9e9"} Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.009425 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-97xgk" Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.013771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" event={"ID":"d868e587-35fd-4fde-9db6-19f7dfa055e3","Type":"ContainerStarted","Data":"77a454aad598682c434ab0deffdd40914c1b7ce85e2ea36024c043912bd00807"} Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.015283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpwcs" event={"ID":"56c36de6-d90c-48e1-bfda-466b3818ed61","Type":"ContainerStarted","Data":"a6a5144c8bf6ebe7111561582ed87111819f209ed3451b8464f722f9db2ae3c2"} Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.015368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpwcs" event={"ID":"56c36de6-d90c-48e1-bfda-466b3818ed61","Type":"ContainerStarted","Data":"ca0bc89ac6cfbb7b139b036afb7baabc42ddbb83b2183f82f8736aa83e047d2c"} Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.023291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerStarted","Data":"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550"} Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.042789 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fpwcs" podStartSLOduration=2.042759749 podStartE2EDuration="2.042759749s" podCreationTimestamp="2025-11-22 07:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:32.038248055 +0000 UTC m=+1733.879671061" watchObservedRunningTime="2025-11-22 07:39:32.042759749 +0000 UTC m=+1733.884182765" Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.084603 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.167621 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.185281 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-97xgk"] Nov 22 07:39:32 crc kubenswrapper[4858]: I1122 07:39:32.538644 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:39:32 crc kubenswrapper[4858]: E1122 07:39:32.538952 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.034377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerStarted","Data":"6325a643277214e6820d9d23f8b64430ed2c31d44677509064e322f3ad0b9c22"} Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.038527 4858 generic.go:334] "Generic (PLEG): container finished" podID="d868e587-35fd-4fde-9db6-19f7dfa055e3" 
containerID="4d3ab0076e383f6fcf1b6e2a3cd1c529bcd6b4a985753e32a4c15af30ebda81f" exitCode=0 Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.039555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" event={"ID":"d868e587-35fd-4fde-9db6-19f7dfa055e3","Type":"ContainerDied","Data":"4d3ab0076e383f6fcf1b6e2a3cd1c529bcd6b4a985753e32a4c15af30ebda81f"} Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.124407 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.462807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:39:33 crc kubenswrapper[4858]: I1122 07:39:33.548148 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c972e4-1af3-48ac-af6c-2e65080ed8b5" path="/var/lib/kubelet/pods/65c972e4-1af3-48ac-af6c-2e65080ed8b5/volumes" Nov 22 07:39:34 crc kubenswrapper[4858]: I1122 07:39:34.051583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" event={"ID":"d868e587-35fd-4fde-9db6-19f7dfa055e3","Type":"ContainerStarted","Data":"52a540c4b96bacbe908d815fc8802c36c28cfacfb5f3579970dbd564a0717de9"} Nov 22 07:39:34 crc kubenswrapper[4858]: I1122 07:39:34.052037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:34 crc kubenswrapper[4858]: I1122 07:39:34.076804 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" podStartSLOduration=4.07677945 podStartE2EDuration="4.07677945s" podCreationTimestamp="2025-11-22 07:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:34.073174595 +0000 UTC m=+1735.914597611" watchObservedRunningTime="2025-11-22 07:39:34.07677945 +0000 UTC m=+1735.918202456" Nov 22 07:39:35 crc kubenswrapper[4858]: I1122 07:39:35.860890 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 07:39:36 crc kubenswrapper[4858]: I1122 07:39:36.070794 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerID="749e7f842b25d66763df85de32ae258db50242c10ba859d9ba25bc43fedc493f" exitCode=0 Nov 22 07:39:36 crc kubenswrapper[4858]: I1122 07:39:36.070948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerDied","Data":"749e7f842b25d66763df85de32ae258db50242c10ba859d9ba25bc43fedc493f"} Nov 22 07:39:37 crc kubenswrapper[4858]: I1122 07:39:37.080393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerStarted","Data":"a35f97adc654f8d53512934ced68b20cadeb39ebe2016eef17d8e1859247bf90"} Nov 22 07:39:37 crc kubenswrapper[4858]: I1122 07:39:37.083128 4858 generic.go:334] "Generic (PLEG): container finished" podID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerID="162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550" exitCode=0 Nov 22 07:39:37 crc kubenswrapper[4858]: I1122 07:39:37.083188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerDied","Data":"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550"} Nov 22 07:39:37 crc kubenswrapper[4858]: I1122 07:39:37.103688 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.65525926 podStartE2EDuration="1m10.103666539s" podCreationTimestamp="2025-11-22 07:38:27 +0000 UTC" firstStartedPulling="2025-11-22 07:38:29.768262772 +0000 UTC m=+1671.609685778" lastFinishedPulling="2025-11-22 07:39:30.216670051 +0000 UTC m=+1732.058093057" observedRunningTime="2025-11-22 07:39:37.102104449 +0000 UTC m=+1738.943527465" watchObservedRunningTime="2025-11-22 07:39:37.103666539 +0000 UTC m=+1738.945089545" Nov 22 07:39:38 crc kubenswrapper[4858]: I1122 07:39:38.094212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerStarted","Data":"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924"} Nov 22 07:39:38 crc kubenswrapper[4858]: I1122 07:39:38.118227 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371966.736572 podStartE2EDuration="1m10.11820363s" podCreationTimestamp="2025-11-22 07:38:28 +0000 UTC" firstStartedPulling="2025-11-22 07:38:32.022706847 +0000 UTC m=+1673.864129853" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:38.116418464 +0000 UTC m=+1739.957841490" watchObservedRunningTime="2025-11-22 07:39:38.11820363 +0000 UTC m=+1739.959626636" Nov 22 07:39:38 crc kubenswrapper[4858]: I1122 07:39:38.967698 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 07:39:38 crc kubenswrapper[4858]: I1122 07:39:38.967836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 07:39:40 crc kubenswrapper[4858]: I1122 07:39:40.286239 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 07:39:40 crc kubenswrapper[4858]: I1122 07:39:40.286664 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 07:39:41 crc kubenswrapper[4858]: I1122 07:39:41.220600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:39:41 crc kubenswrapper[4858]: I1122 07:39:41.293981 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:39:41 crc kubenswrapper[4858]: I1122 07:39:41.294265 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" containerID="cri-o://e55158c45050927635aa9db9ab484583b8f882f7f462bbec9bbaa9eb32add1c5" gracePeriod=10 Nov 22 07:39:42 crc kubenswrapper[4858]: I1122 07:39:42.131856 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerID="e55158c45050927635aa9db9ab484583b8f882f7f462bbec9bbaa9eb32add1c5" exitCode=0 Nov 22 07:39:42 crc kubenswrapper[4858]: I1122 07:39:42.132114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" 
event={"ID":"ce63b917-9d99-4ff8-936a-b2e3dff67a94","Type":"ContainerDied","Data":"e55158c45050927635aa9db9ab484583b8f882f7f462bbec9bbaa9eb32add1c5"} Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.309272 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.318992 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.335891 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.384415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.384802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.384963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5cd\" (UniqueName: \"kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.385085 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.385199 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.460787 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.487541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.488947 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.489032 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5cd\" (UniqueName: \"kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.489067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.489101 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.490687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.491264 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.492196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.494413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.516010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5cd\" (UniqueName: \"kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd\") pod \"dnsmasq-dns-5c9fdb784c-58wfb\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:43 crc kubenswrapper[4858]: I1122 07:39:43.640020 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.134526 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:39:44 crc kubenswrapper[4858]: W1122 07:39:44.141521 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f00ad69_2781_482a_aa97_43bfe1f33f76.slice/crio-359bb93f6ef96b1c15c05fbea2aadd3db28b80e7fbece8dd84b61e69e46f103b WatchSource:0}: Error finding container 359bb93f6ef96b1c15c05fbea2aadd3db28b80e7fbece8dd84b61e69e46f103b: Status 404 returned error can't find the container with id 359bb93f6ef96b1c15c05fbea2aadd3db28b80e7fbece8dd84b61e69e46f103b Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.154789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" event={"ID":"1f00ad69-2781-482a-aa97-43bfe1f33f76","Type":"ContainerStarted","Data":"359bb93f6ef96b1c15c05fbea2aadd3db28b80e7fbece8dd84b61e69e46f103b"} Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.478077 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.484467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.488979 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qf5t9" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.489020 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.489440 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.495647 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.504149 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.612012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.612158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn88k\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.612225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.612248 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.612272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.714362 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.714591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn88k\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.714684 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.714742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.714804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.715156 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: E1122 07:39:44.715365 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:44 crc kubenswrapper[4858]: E1122 07:39:44.715604 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:44 crc kubenswrapper[4858]: E1122 07:39:44.715654 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:45.215638069 +0000 UTC m=+1747.057061075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.715821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.715447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.737664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn88k\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:44 crc kubenswrapper[4858]: I1122 07:39:44.738899 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.157445 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-czmj7"] Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.159544 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.163237 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.163264 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.174337 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.177593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-czmj7"] Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.223740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:45 crc kubenswrapper[4858]: E1122 07:39:45.224036 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:45 crc kubenswrapper[4858]: E1122 07:39:45.224084 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:45 crc kubenswrapper[4858]: E1122 07:39:45.224165 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:46.224139644 +0000 UTC m=+1748.065562650 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.325899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.325971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.326153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkb56\" (UniqueName: \"kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.326303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.326467 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.326570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.326657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429457 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkb56\" (UniqueName: \"kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.429655 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.430459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.430744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.431276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.435219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.435618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.438605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.456162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkb56\" (UniqueName: \"kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56\") pod \"swift-ring-rebalance-czmj7\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.489761 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:39:45 crc kubenswrapper[4858]: I1122 07:39:45.970719 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-czmj7"] Nov 22 07:39:45 crc kubenswrapper[4858]: W1122 07:39:45.976493 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e3f90c_3676_41ee_ab2d_f0dca9196a02.slice/crio-9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89 WatchSource:0}: Error finding container 9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89: Status 404 returned error can't find the container with id 9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89 Nov 22 07:39:46 crc kubenswrapper[4858]: I1122 07:39:46.172608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-czmj7" event={"ID":"98e3f90c-3676-41ee-ab2d-f0dca9196a02","Type":"ContainerStarted","Data":"9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89"} Nov 22 07:39:46 crc kubenswrapper[4858]: I1122 07:39:46.265350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:46 crc kubenswrapper[4858]: E1122 07:39:46.265587 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:46 crc kubenswrapper[4858]: E1122 07:39:46.265610 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:46 crc kubenswrapper[4858]: E1122 07:39:46.265674 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:39:48.26565467 +0000 UTC m=+1750.107077676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:39:46 crc kubenswrapper[4858]: I1122 07:39:46.536236 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:39:46 crc kubenswrapper[4858]: E1122 07:39:46.537917 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:39:48 crc kubenswrapper[4858]: I1122 07:39:48.301771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:48 crc kubenswrapper[4858]: E1122 07:39:48.302025 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:48 crc kubenswrapper[4858]: E1122 07:39:48.302072 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:48 crc kubenswrapper[4858]: E1122 07:39:48.302147 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:52.302118588 +0000 UTC m=+1754.143541594 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:39:48 crc kubenswrapper[4858]: I1122 07:39:48.461181 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 22 07:39:52 crc kubenswrapper[4858]: I1122 07:39:52.372502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:39:52 crc kubenswrapper[4858]: E1122 07:39:52.372742 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:52 crc kubenswrapper[4858]: E1122 07:39:52.373141 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:52 crc kubenswrapper[4858]: E1122 07:39:52.373232 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:40:00.373207198 +0000 UTC m=+1762.214630204 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:39:53 crc kubenswrapper[4858]: I1122 07:39:53.460790 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 22 07:39:53 crc kubenswrapper[4858]: I1122 07:39:53.460919 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:39:58 crc kubenswrapper[4858]: I1122 07:39:58.389714 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:39:58 crc kubenswrapper[4858]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 07:39:58 crc kubenswrapper[4858]: > Nov 22 07:39:58 crc kubenswrapper[4858]: I1122 07:39:58.432170 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:39:58 crc kubenswrapper[4858]: I1122 07:39:58.462995 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 22 07:39:58 crc kubenswrapper[4858]: I1122 07:39:58.536682 4858 scope.go:117] "RemoveContainer" 
containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:39:58 crc kubenswrapper[4858]: E1122 07:39:58.537056 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:00 crc kubenswrapper[4858]: I1122 07:40:00.414467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:40:00 crc kubenswrapper[4858]: E1122 07:40:00.414677 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:40:00 crc kubenswrapper[4858]: E1122 07:40:00.414998 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:40:00 crc kubenswrapper[4858]: E1122 07:40:00.415074 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:40:16.415052644 +0000 UTC m=+1778.256475650 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:40:01 crc kubenswrapper[4858]: I1122 07:40:01.296542 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerID="01bded6dc21a4fd246c2c6f00a02bab06b43ba88276bd0abc3233f17785ed65c" exitCode=0 Nov 22 07:40:01 crc kubenswrapper[4858]: I1122 07:40:01.296606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerDied","Data":"01bded6dc21a4fd246c2c6f00a02bab06b43ba88276bd0abc3233f17785ed65c"} Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.394591 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:40:03 crc kubenswrapper[4858]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 07:40:03 crc kubenswrapper[4858]: > Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.431865 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.661816 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rm92c-config-q7gww"] Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.663346 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.666566 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.671235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c-config-q7gww"] Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g98g\" (UniqueName: \"kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688455 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.688556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.792180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.792740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts\") pod 
\"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.792776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.792902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.793016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.793049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g98g\" (UniqueName: \"kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.793708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.793768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.792638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.794312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.796301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts\") pod 
\"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.817277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g98g\" (UniqueName: \"kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g\") pod \"ovn-controller-rm92c-config-q7gww\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:03 crc kubenswrapper[4858]: I1122 07:40:03.992736 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:04 crc kubenswrapper[4858]: I1122 07:40:04.330781 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerID="87fe90dc85bee746441fd67018989c515e806015d6bbac9627638115bc9f8c88" exitCode=0 Nov 22 07:40:04 crc kubenswrapper[4858]: I1122 07:40:04.330829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" event={"ID":"1f00ad69-2781-482a-aa97-43bfe1f33f76","Type":"ContainerDied","Data":"87fe90dc85bee746441fd67018989c515e806015d6bbac9627638115bc9f8c88"} Nov 22 07:40:04 crc kubenswrapper[4858]: I1122 07:40:04.990813 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.076226 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="galera" probeResult="failure" output=< Nov 22 07:40:05 crc kubenswrapper[4858]: wsrep_local_state_comment (Joined) differs from Synced Nov 22 07:40:05 crc kubenswrapper[4858]: > Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.340920 4858 generic.go:334] "Generic (PLEG): container finished" podID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerID="6325a643277214e6820d9d23f8b64430ed2c31d44677509064e322f3ad0b9c22" exitCode=0 Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.341000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerDied","Data":"6325a643277214e6820d9d23f8b64430ed2c31d44677509064e322f3ad0b9c22"} Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.344010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" event={"ID":"ce63b917-9d99-4ff8-936a-b2e3dff67a94","Type":"ContainerDied","Data":"194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8"} Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.344054 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="194d024c29ed04149c5d0540a9be51637e3a23cae259ab95e585a67fdae85cc8" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.398293 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.423494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config\") pod \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.423587 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5kt4\" (UniqueName: \"kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4\") pod \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.423659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb\") pod \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.423730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc\") pod \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\" (UID: \"ce63b917-9d99-4ff8-936a-b2e3dff67a94\") " Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.448968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4" (OuterVolumeSpecName: "kube-api-access-w5kt4") pod "ce63b917-9d99-4ff8-936a-b2e3dff67a94" (UID: "ce63b917-9d99-4ff8-936a-b2e3dff67a94"). InnerVolumeSpecName "kube-api-access-w5kt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.480311 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce63b917-9d99-4ff8-936a-b2e3dff67a94" (UID: "ce63b917-9d99-4ff8-936a-b2e3dff67a94"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.480422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce63b917-9d99-4ff8-936a-b2e3dff67a94" (UID: "ce63b917-9d99-4ff8-936a-b2e3dff67a94"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.481982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config" (OuterVolumeSpecName: "config") pod "ce63b917-9d99-4ff8-936a-b2e3dff67a94" (UID: "ce63b917-9d99-4ff8-936a-b2e3dff67a94"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.527477 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.527745 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.527850 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5kt4\" (UniqueName: \"kubernetes.io/projected/ce63b917-9d99-4ff8-936a-b2e3dff67a94-kube-api-access-w5kt4\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:05 crc kubenswrapper[4858]: I1122 07:40:05.527954 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce63b917-9d99-4ff8-936a-b2e3dff67a94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:06 crc kubenswrapper[4858]: I1122 07:40:06.352141 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" Nov 22 07:40:06 crc kubenswrapper[4858]: I1122 07:40:06.391477 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:40:06 crc kubenswrapper[4858]: I1122 07:40:06.398199 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-vhh69"] Nov 22 07:40:07 crc kubenswrapper[4858]: E1122 07:40:07.407250 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8" Nov 22 07:40:07 crc kubenswrapper[4858]: E1122 07:40:07.407908 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8,Command:[/usr/local/bin/swift-ring-tool 
all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:ad0e093d-749e-4a6a-a655-9af559e7c3a0,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkb56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-czmj7_openstack(98e3f90c-3676-41ee-ab2d-f0dca9196a02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:40:07 crc kubenswrapper[4858]: E1122 07:40:07.409175 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-czmj7" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" Nov 22 07:40:07 crc kubenswrapper[4858]: I1122 07:40:07.548221 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" path="/var/lib/kubelet/pods/ce63b917-9d99-4ff8-936a-b2e3dff67a94/volumes" Nov 22 07:40:07 crc kubenswrapper[4858]: I1122 07:40:07.936362 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c-config-q7gww"] Nov 22 07:40:07 crc kubenswrapper[4858]: W1122 
07:40:07.943646 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode11b8318_5258_4f86_96bf_11eb26490e55.slice/crio-0612ab1112e1706388cc0188bcb44fae6a0e8b9c3c219322cf16a11e2c9251d4 WatchSource:0}: Error finding container 0612ab1112e1706388cc0188bcb44fae6a0e8b9c3c219322cf16a11e2c9251d4: Status 404 returned error can't find the container with id 0612ab1112e1706388cc0188bcb44fae6a0e8b9c3c219322cf16a11e2c9251d4 Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.257716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.347906 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="galera" probeResult="failure" output=< Nov 22 07:40:08 crc kubenswrapper[4858]: wsrep_local_state_comment (Joined) differs from Synced Nov 22 07:40:08 crc kubenswrapper[4858]: > Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.369561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-q7gww" event={"ID":"e11b8318-5258-4f86-96bf-11eb26490e55","Type":"ContainerStarted","Data":"12f4a71e97591d1d56eaa1899d33619999b0a2be882cfab84c897a7b44f91342"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.369632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-q7gww" event={"ID":"e11b8318-5258-4f86-96bf-11eb26490e55","Type":"ContainerStarted","Data":"0612ab1112e1706388cc0188bcb44fae6a0e8b9c3c219322cf16a11e2c9251d4"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.373289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" event={"ID":"1f00ad69-2781-482a-aa97-43bfe1f33f76","Type":"ContainerStarted","Data":"4312d0a90cc8263244016ad6ab9b8e2c56f8007d2ada5a13da0a81e22caa7617"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.373533 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.376191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerStarted","Data":"fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.377115 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.395399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerStarted","Data":"b418a78ed1ffafc15b6ad4bd4c7badd60596b1b40cbf746a619168a2e1a176d2"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.402412 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rm92c-config-q7gww" podStartSLOduration=5.40238597 podStartE2EDuration="5.40238597s" podCreationTimestamp="2025-11-22 07:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:08.394812208 +0000 UTC m=+1770.236235214" watchObservedRunningTime="2025-11-22 07:40:08.40238597 +0000 UTC 
m=+1770.243808986" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.404443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerStarted","Data":"87dc9b2e06bc62a486c9c4668b5e0075930637436dc360e930cf4a1288e9f350"} Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.405782 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 07:40:08 crc kubenswrapper[4858]: E1122 07:40:08.406497 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:a80a074e227d3238bb6f285788a9e886ae7a5909ccbc5c19c93c369bdfe5b3b8\\\"\"" pod="openstack/swift-ring-rebalance-czmj7" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.412174 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rm92c" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.441789 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podStartSLOduration=25.441763512 podStartE2EDuration="25.441763512s" podCreationTimestamp="2025-11-22 07:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:08.437761735 +0000 UTC m=+1770.279184741" watchObservedRunningTime="2025-11-22 07:40:08.441763512 +0000 UTC m=+1770.283186518" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.461197 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c65c5f57f-vhh69" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.477105 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.706759024 podStartE2EDuration="1m42.477083795s" podCreationTimestamp="2025-11-22 07:38:26 +0000 UTC" firstStartedPulling="2025-11-22 07:38:28.82422114 +0000 UTC m=+1670.665644146" lastFinishedPulling="2025-11-22 07:39:21.594545911 +0000 UTC m=+1723.435968917" observedRunningTime="2025-11-22 07:40:08.469936385 +0000 UTC m=+1770.311359401" watchObservedRunningTime="2025-11-22 07:40:08.477083795 +0000 UTC m=+1770.318506801" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.532519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371933.32228 podStartE2EDuration="1m43.53249545s" podCreationTimestamp="2025-11-22 07:38:25 +0000 UTC" firstStartedPulling="2025-11-22 07:38:28.166908425 +0000 UTC m=+1670.008331431" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:08.524268177 +0000 UTC m=+1770.365691203" watchObservedRunningTime="2025-11-22 07:40:08.53249545 +0000 UTC m=+1770.373918456" Nov 22 07:40:08 crc kubenswrapper[4858]: I1122 07:40:08.579656 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.572637773 podStartE2EDuration="1m34.579628541s" podCreationTimestamp="2025-11-22 07:38:34 +0000 UTC" firstStartedPulling="2025-11-22 
07:38:37.512094417 +0000 UTC m=+1679.353517423" lastFinishedPulling="2025-11-22 07:40:07.519085185 +0000 UTC m=+1769.360508191" observedRunningTime="2025-11-22 07:40:08.571386566 +0000 UTC m=+1770.412809582" watchObservedRunningTime="2025-11-22 07:40:08.579628541 +0000 UTC m=+1770.421051547" Nov 22 07:40:09 crc kubenswrapper[4858]: I1122 07:40:09.067401 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 07:40:09 crc kubenswrapper[4858]: I1122 07:40:09.281850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 07:40:09 crc kubenswrapper[4858]: I1122 07:40:09.414688 4858 generic.go:334] "Generic (PLEG): container finished" podID="e11b8318-5258-4f86-96bf-11eb26490e55" containerID="12f4a71e97591d1d56eaa1899d33619999b0a2be882cfab84c897a7b44f91342" exitCode=0 Nov 22 07:40:09 crc kubenswrapper[4858]: I1122 07:40:09.415731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-q7gww" event={"ID":"e11b8318-5258-4f86-96bf-11eb26490e55","Type":"ContainerDied","Data":"12f4a71e97591d1d56eaa1899d33619999b0a2be882cfab84c897a7b44f91342"} Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.361831 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.493052 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xgxrd"] Nov 22 07:40:10 crc kubenswrapper[4858]: E1122 07:40:10.493540 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.493558 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" Nov 22 07:40:10 crc kubenswrapper[4858]: E1122 07:40:10.493583 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="init" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.493590 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="init" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.493788 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce63b917-9d99-4ff8-936a-b2e3dff67a94" containerName="dnsmasq-dns" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.494531 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.505819 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xgxrd"] Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.542277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66j2\" (UniqueName: \"kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.542409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.635236 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cf57-account-create-cz7dj"] Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.636422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.644318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66j2\" (UniqueName: \"kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.645849 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.646871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.651971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.658872 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cf57-account-create-cz7dj"] Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.694427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66j2\" (UniqueName: \"kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2\") pod \"keystone-db-create-xgxrd\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.749694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnnt\" (UniqueName: 
\"kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.749960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.852023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.852090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlnnt\" (UniqueName: \"kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.853388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.854106 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.854212 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.876727 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlnnt\" (UniqueName: \"kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt\") pod \"keystone-cf57-account-create-cz7dj\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.946255 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lc8rx"] Nov 22 07:40:10 crc kubenswrapper[4858]: E1122 07:40:10.946691 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11b8318-5258-4f86-96bf-11eb26490e55" containerName="ovn-config" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.946720 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11b8318-5258-4f86-96bf-11eb26490e55" containerName="ovn-config" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.946952 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11b8318-5258-4f86-96bf-11eb26490e55" containerName="ovn-config" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.947668 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954100 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g98g\" (UniqueName: \"kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954247 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954315 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn\") pod \"e11b8318-5258-4f86-96bf-11eb26490e55\" (UID: \"e11b8318-5258-4f86-96bf-11eb26490e55\") " Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954922 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.954993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run" (OuterVolumeSpecName: "var-run") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.955109 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.955721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts" (OuterVolumeSpecName: "scripts") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.958096 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.960014 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lc8rx"] Nov 22 07:40:10 crc kubenswrapper[4858]: I1122 07:40:10.964111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g" (OuterVolumeSpecName: "kube-api-access-6g98g") pod "e11b8318-5258-4f86-96bf-11eb26490e55" (UID: "e11b8318-5258-4f86-96bf-11eb26490e55"). InnerVolumeSpecName "kube-api-access-6g98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn92w\" (UniqueName: \"kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059181 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059192 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e11b8318-5258-4f86-96bf-11eb26490e55-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059202 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g98g\" (UniqueName: \"kubernetes.io/projected/e11b8318-5258-4f86-96bf-11eb26490e55-kube-api-access-6g98g\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059213 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059221 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.059230 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e11b8318-5258-4f86-96bf-11eb26490e55-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.065050 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9450-account-create-vlh78"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.066768 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.070146 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.083957 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9450-account-create-vlh78"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.149702 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rm92c-config-q7gww"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.157730 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rm92c-config-q7gww"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.160818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxstg\" (UniqueName: \"kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.160903 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.160950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn92w\" (UniqueName: \"kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.161135 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.162296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.193488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn92w\" (UniqueName: \"kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w\") pod \"placement-db-create-lc8rx\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.262786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.263204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxstg\" (UniqueName: \"kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.264170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.272159 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rm92c-config-qx8g6"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.273402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.282541 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.284778 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.286991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxstg\" (UniqueName: \"kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg\") pod \"placement-9450-account-create-vlh78\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.317087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c-config-qx8g6"] Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhw8g\" (UniqueName: \"kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.366666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:11 crc kubenswrapper[4858]: I1122 07:40:11.400101 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.441437 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0612ab1112e1706388cc0188bcb44fae6a0e8b9c3c219322cf16a11e2c9251d4" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.441527 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c-config-q7gww" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhw8g\" (UniqueName: \"kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.468850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.469037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.469064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: 
\"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.469836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.469859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.471271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.489711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhw8g\" (UniqueName: \"kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g\") pod \"ovn-controller-rm92c-config-qx8g6\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.541261 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:40:14 crc kubenswrapper[4858]: E1122 07:40:11.541764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:14 crc kubenswrapper[4858]: W1122 07:40:11.551048 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd99df9d_2d5a_4997_b876_4573a931ee39.slice/crio-40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1 WatchSource:0}: Error finding container 40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1: Status 404 returned error can't find the container with id 40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1 Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.563922 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11b8318-5258-4f86-96bf-11eb26490e55" path="/var/lib/kubelet/pods/e11b8318-5258-4f86-96bf-11eb26490e55/volumes" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.578028 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xgxrd"] Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.592118 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:11.702534 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cf57-account-create-cz7dj"] Nov 22 07:40:14 crc kubenswrapper[4858]: W1122 07:40:11.727385 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod725a427f_782d_4d51_95f9_24ff18fe1591.slice/crio-9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd WatchSource:0}: Error finding container 9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd: Status 404 returned error can't find the container with id 9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:12.330774 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:12.452721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xgxrd" event={"ID":"bd99df9d-2d5a-4997-b876-4573a931ee39","Type":"ContainerStarted","Data":"40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1"} Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:12.454684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cf57-account-create-cz7dj" event={"ID":"725a427f-782d-4d51-95f9-24ff18fe1591","Type":"ContainerStarted","Data":"9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd"} Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:13.643693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:13.701785 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:13.702083 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="dnsmasq-dns" containerID="cri-o://52a540c4b96bacbe908d815fc8802c36c28cfacfb5f3579970dbd564a0717de9" gracePeriod=10 Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.475188 4858 generic.go:334] "Generic (PLEG): container finished" podID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerID="52a540c4b96bacbe908d815fc8802c36c28cfacfb5f3579970dbd564a0717de9" exitCode=0 Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.475248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" event={"ID":"d868e587-35fd-4fde-9db6-19f7dfa055e3","Type":"ContainerDied","Data":"52a540c4b96bacbe908d815fc8802c36c28cfacfb5f3579970dbd564a0717de9"} Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.485916 4858 generic.go:334] "Generic (PLEG): container finished" podID="bd99df9d-2d5a-4997-b876-4573a931ee39" containerID="9a14ba256974b4e536774cfc054ad26464c059adcc22a7bb717825b06118eb03" exitCode=0 Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.486019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xgxrd" event={"ID":"bd99df9d-2d5a-4997-b876-4573a931ee39","Type":"ContainerDied","Data":"9a14ba256974b4e536774cfc054ad26464c059adcc22a7bb717825b06118eb03"} Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.492629 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="725a427f-782d-4d51-95f9-24ff18fe1591" containerID="6f22965b9d245713fae3ab6b040b415aa1ede7b9a460b7408dad4321ecc55b82" exitCode=0 Nov 22 07:40:14 crc kubenswrapper[4858]: I1122 07:40:14.492691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cf57-account-create-cz7dj" event={"ID":"725a427f-782d-4d51-95f9-24ff18fe1591","Type":"ContainerDied","Data":"6f22965b9d245713fae3ab6b040b415aa1ede7b9a460b7408dad4321ecc55b82"} Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.304027 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.352103 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc\") pod \"d868e587-35fd-4fde-9db6-19f7dfa055e3\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.352198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config\") pod \"d868e587-35fd-4fde-9db6-19f7dfa055e3\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.352534 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb\") pod \"d868e587-35fd-4fde-9db6-19f7dfa055e3\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.352596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x822c\" (UniqueName: \"kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c\") pod \"d868e587-35fd-4fde-9db6-19f7dfa055e3\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.352649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb\") pod \"d868e587-35fd-4fde-9db6-19f7dfa055e3\" (UID: \"d868e587-35fd-4fde-9db6-19f7dfa055e3\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.361983 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c" (OuterVolumeSpecName: "kube-api-access-x822c") pod "d868e587-35fd-4fde-9db6-19f7dfa055e3" (UID: "d868e587-35fd-4fde-9db6-19f7dfa055e3"). InnerVolumeSpecName "kube-api-access-x822c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.405260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d868e587-35fd-4fde-9db6-19f7dfa055e3" (UID: "d868e587-35fd-4fde-9db6-19f7dfa055e3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.409296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config" (OuterVolumeSpecName: "config") pod "d868e587-35fd-4fde-9db6-19f7dfa055e3" (UID: "d868e587-35fd-4fde-9db6-19f7dfa055e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.410037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d868e587-35fd-4fde-9db6-19f7dfa055e3" (UID: "d868e587-35fd-4fde-9db6-19f7dfa055e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.416817 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d868e587-35fd-4fde-9db6-19f7dfa055e3" (UID: "d868e587-35fd-4fde-9db6-19f7dfa055e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.455569 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.455607 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.455624 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x822c\" (UniqueName: \"kubernetes.io/projected/d868e587-35fd-4fde-9db6-19f7dfa055e3-kube-api-access-x822c\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.455636 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.455645 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d868e587-35fd-4fde-9db6-19f7dfa055e3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.505434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" event={"ID":"d868e587-35fd-4fde-9db6-19f7dfa055e3","Type":"ContainerDied","Data":"77a454aad598682c434ab0deffdd40914c1b7ce85e2ea36024c043912bd00807"} Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.505553 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-xcn8s" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.505811 4858 scope.go:117] "RemoveContainer" containerID="52a540c4b96bacbe908d815fc8802c36c28cfacfb5f3579970dbd564a0717de9" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.525908 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lc8rx"] Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.559310 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.563912 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-xcn8s"] Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.565390 4858 scope.go:117] "RemoveContainer" containerID="4d3ab0076e383f6fcf1b6e2a3cd1c529bcd6b4a985753e32a4c15af30ebda81f" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.610721 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9450-account-create-vlh78"] Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.620066 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rm92c-config-qx8g6"] Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.872728 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.952618 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.964968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66j2\" (UniqueName: \"kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2\") pod \"bd99df9d-2d5a-4997-b876-4573a931ee39\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.965187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts\") pod \"bd99df9d-2d5a-4997-b876-4573a931ee39\" (UID: \"bd99df9d-2d5a-4997-b876-4573a931ee39\") " Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.965791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd99df9d-2d5a-4997-b876-4573a931ee39" (UID: "bd99df9d-2d5a-4997-b876-4573a931ee39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.967283 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd99df9d-2d5a-4997-b876-4573a931ee39-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:15 crc kubenswrapper[4858]: I1122 07:40:15.973074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2" (OuterVolumeSpecName: "kube-api-access-p66j2") pod "bd99df9d-2d5a-4997-b876-4573a931ee39" (UID: "bd99df9d-2d5a-4997-b876-4573a931ee39"). InnerVolumeSpecName "kube-api-access-p66j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.067982 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts\") pod \"725a427f-782d-4d51-95f9-24ff18fe1591\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.068305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlnnt\" (UniqueName: \"kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt\") pod \"725a427f-782d-4d51-95f9-24ff18fe1591\" (UID: \"725a427f-782d-4d51-95f9-24ff18fe1591\") " Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.068641 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "725a427f-782d-4d51-95f9-24ff18fe1591" (UID: "725a427f-782d-4d51-95f9-24ff18fe1591"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.068883 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p66j2\" (UniqueName: \"kubernetes.io/projected/bd99df9d-2d5a-4997-b876-4573a931ee39-kube-api-access-p66j2\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.068919 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725a427f-782d-4d51-95f9-24ff18fe1591-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.070666 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gmvcq"] Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.071158 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd99df9d-2d5a-4997-b876-4573a931ee39" containerName="mariadb-database-create" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071179 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd99df9d-2d5a-4997-b876-4573a931ee39" containerName="mariadb-database-create" Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.071479 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="init" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071489 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="init" Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.071516 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="dnsmasq-dns" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071525 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="dnsmasq-dns" Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.071548 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725a427f-782d-4d51-95f9-24ff18fe1591" containerName="mariadb-account-create" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071555 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="725a427f-782d-4d51-95f9-24ff18fe1591" containerName="mariadb-account-create" Nov 22 07:40:16 crc 
kubenswrapper[4858]: I1122 07:40:16.071550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt" (OuterVolumeSpecName: "kube-api-access-nlnnt") pod "725a427f-782d-4d51-95f9-24ff18fe1591" (UID: "725a427f-782d-4d51-95f9-24ff18fe1591"). InnerVolumeSpecName "kube-api-access-nlnnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071728 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="725a427f-782d-4d51-95f9-24ff18fe1591" containerName="mariadb-account-create" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071783 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd99df9d-2d5a-4997-b876-4573a931ee39" containerName="mariadb-database-create" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.071821 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" containerName="dnsmasq-dns" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.072626 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.083848 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gmvcq"] Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.170777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.170904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgbc\" (UniqueName: \"kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.170963 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlnnt\" (UniqueName: \"kubernetes.io/projected/725a427f-782d-4d51-95f9-24ff18fe1591-kube-api-access-nlnnt\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.181928 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2343-account-create-tlktz"] Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.183409 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.185619 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.200675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2343-account-create-tlktz"] Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.272841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.272974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485r2\" (UniqueName: \"kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.273141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgbc\" (UniqueName: \"kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.273253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.274146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.291004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgbc\" (UniqueName: \"kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc\") pod \"glance-db-create-gmvcq\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.329747 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.375505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485r2\" (UniqueName: \"kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.375833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.379033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.398860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485r2\" (UniqueName: \"kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2\") pod \"glance-2343-account-create-tlktz\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.404370 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.478306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.478545 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.478582 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:40:16 crc kubenswrapper[4858]: E1122 07:40:16.478647 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift podName:df9f2ec4-f57a-47a7-94a2-17549e2ed641 nodeName:}" failed. No retries permitted until 2025-11-22 07:40:48.478627678 +0000 UTC m=+1810.320050684 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift") pod "swift-storage-0" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641") : configmap "swift-ring-files" not found Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.518060 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.519935 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.523499 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.524824 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3afe9df-46ed-4387-a69d-ca42dc63b199" containerID="541f8565f916d4fed150459498ecc51ccc608d1c9bc0ed12d6ab3ee39555c0bc" exitCode=0 Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.524889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lc8rx" event={"ID":"d3afe9df-46ed-4387-a69d-ca42dc63b199","Type":"ContainerDied","Data":"541f8565f916d4fed150459498ecc51ccc608d1c9bc0ed12d6ab3ee39555c0bc"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.524919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lc8rx" event={"ID":"d3afe9df-46ed-4387-a69d-ca42dc63b199","Type":"ContainerStarted","Data":"bc583fe642263f8ea35c6994697a90de15fee3d8668a6102d155a1d336354fc7"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.525394 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gdgdw" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.525507 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.525720 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.532612 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cf57-account-create-cz7dj" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.533095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cf57-account-create-cz7dj" event={"ID":"725a427f-782d-4d51-95f9-24ff18fe1591","Type":"ContainerDied","Data":"9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.533147 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8b162a5ec75b96038090ecbb3d5574deba6f7d5a04a72b66f74152663b75bd" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.544345 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.550722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.560651 4858 generic.go:334] "Generic (PLEG): container finished" podID="35cb67ab-46a2-4b5e-9ca0-f7442581a175" containerID="8f97cf8d77768e552938342ad815f326771e5cbc6898755842e6ef708fc38e40" exitCode=0 Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.560777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-qx8g6" event={"ID":"35cb67ab-46a2-4b5e-9ca0-f7442581a175","Type":"ContainerDied","Data":"8f97cf8d77768e552938342ad815f326771e5cbc6898755842e6ef708fc38e40"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.560842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-qx8g6" event={"ID":"35cb67ab-46a2-4b5e-9ca0-f7442581a175","Type":"ContainerStarted","Data":"bda1527f5ee1ffa026901d32e60c86684f5956f95ec90f7d4e0cdc4c13735f67"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.563667 4858 generic.go:334] "Generic (PLEG): container finished" podID="0eb292c6-b1bc-4c62-a3a5-753730fcd643" containerID="92b0cb42168f7f97d3cfb66cdb73d033c460b257408d844abc6d96bfd9bb9a4d" exitCode=0 Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.563749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9450-account-create-vlh78" event={"ID":"0eb292c6-b1bc-4c62-a3a5-753730fcd643","Type":"ContainerDied","Data":"92b0cb42168f7f97d3cfb66cdb73d033c460b257408d844abc6d96bfd9bb9a4d"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.563782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9450-account-create-vlh78" event={"ID":"0eb292c6-b1bc-4c62-a3a5-753730fcd643","Type":"ContainerStarted","Data":"ada4457e24c714d231804d785429567b6308f11c619030b065afd21482c5603a"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.572437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xgxrd" event={"ID":"bd99df9d-2d5a-4997-b876-4573a931ee39","Type":"ContainerDied","Data":"40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1"} Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.572488 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40759a37fb6a1141787896d7082f491094fd87ec75edf0f1faf6486b49f53ed1" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.572574 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xgxrd" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.584270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.584352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.584445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.584548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw68r\" (UniqueName: \"kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.584988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.585038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.585182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw68r\" (UniqueName: \"kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689756 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689788 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689880 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.689965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.690964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.691323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.692078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.696793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.697215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.696796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.710290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw68r\" (UniqueName: \"kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r\") pod \"ovn-northd-0\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " pod="openstack/ovn-northd-0" Nov 22 07:40:16 crc kubenswrapper[4858]: I1122 07:40:16.896011 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.045894 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gmvcq"] Nov 22 07:40:17 crc kubenswrapper[4858]: W1122 07:40:17.051121 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f26956e_77a1_4cef_8fe2_1c5e398f7b96.slice/crio-351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d WatchSource:0}: Error finding container 351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d: Status 404 returned error can't find the container with id 351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.138236 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2343-account-create-tlktz"] Nov 22 07:40:17 crc kubenswrapper[4858]: W1122 07:40:17.163304 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5221eb_b8ee_4271_a2fe_627f0d08d2cf.slice/crio-0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f WatchSource:0}: Error finding container 0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f: Status 404 returned error can't find the container with id 0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.381725 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:40:17 crc kubenswrapper[4858]: W1122 07:40:17.386021 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bb2d6c0_fce8_4356_a3c5_5b1cd6c23bb2.slice/crio-64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069 WatchSource:0}: Error finding container 64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069: Status 404 returned error can't find the container with id 64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069 Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.546939 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d868e587-35fd-4fde-9db6-19f7dfa055e3" path="/var/lib/kubelet/pods/d868e587-35fd-4fde-9db6-19f7dfa055e3/volumes" Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.585450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerStarted","Data":"64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069"} Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.587586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-2343-account-create-tlktz" event={"ID":"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf","Type":"ContainerStarted","Data":"81249119f01504e2f75136ccaa20d76ad79562ce6c4c032f420d15e3ac22cfbc"} Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.587625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2343-account-create-tlktz" event={"ID":"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf","Type":"ContainerStarted","Data":"0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f"} Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.590679 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmvcq" event={"ID":"2f26956e-77a1-4cef-8fe2-1c5e398f7b96","Type":"ContainerStarted","Data":"887a79386e9217424aa800cb35eef23c59e3cf8a8bb1f2591a6932ebabb407b5"} Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.590988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmvcq" event={"ID":"2f26956e-77a1-4cef-8fe2-1c5e398f7b96","Type":"ContainerStarted","Data":"351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d"} Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.627168 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2343-account-create-tlktz" podStartSLOduration=1.627145422 podStartE2EDuration="1.627145422s" podCreationTimestamp="2025-11-22 07:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:17.611434729 +0000 UTC m=+1779.452857765" watchObservedRunningTime="2025-11-22 07:40:17.627145422 +0000 UTC m=+1779.468568428" Nov 22 07:40:17 crc kubenswrapper[4858]: I1122 07:40:17.627672 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gmvcq" podStartSLOduration=1.627666349 podStartE2EDuration="1.627666349s" podCreationTimestamp="2025-11-22 07:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:17.625378246 +0000 UTC m=+1779.466801262" watchObservedRunningTime="2025-11-22 07:40:17.627666349 +0000 UTC m=+1779.469089355" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.208996 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.216733 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.224284 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn92w\" (UniqueName: \"kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w\") pod \"d3afe9df-46ed-4387-a69d-ca42dc63b199\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhw8g\" (UniqueName: \"kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323731 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts\") pod \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323773 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxstg\" (UniqueName: \"kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg\") pod \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\" (UID: \"0eb292c6-b1bc-4c62-a3a5-753730fcd643\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.323998 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn\") pod \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\" (UID: \"35cb67ab-46a2-4b5e-9ca0-f7442581a175\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts\") pod \"d3afe9df-46ed-4387-a69d-ca42dc63b199\" (UID: \"d3afe9df-46ed-4387-a69d-ca42dc63b199\") " Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324003 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run" (OuterVolumeSpecName: "var-run") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324435 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324456 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.324964 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3afe9df-46ed-4387-a69d-ca42dc63b199" (UID: "d3afe9df-46ed-4387-a69d-ca42dc63b199"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.325088 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.325105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eb292c6-b1bc-4c62-a3a5-753730fcd643" (UID: "0eb292c6-b1bc-4c62-a3a5-753730fcd643"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.325501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts" (OuterVolumeSpecName: "scripts") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.330216 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg" (OuterVolumeSpecName: "kube-api-access-nxstg") pod "0eb292c6-b1bc-4c62-a3a5-753730fcd643" (UID: "0eb292c6-b1bc-4c62-a3a5-753730fcd643"). InnerVolumeSpecName "kube-api-access-nxstg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.334657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g" (OuterVolumeSpecName: "kube-api-access-fhw8g") pod "35cb67ab-46a2-4b5e-9ca0-f7442581a175" (UID: "35cb67ab-46a2-4b5e-9ca0-f7442581a175"). InnerVolumeSpecName "kube-api-access-fhw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.348554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w" (OuterVolumeSpecName: "kube-api-access-gn92w") pod "d3afe9df-46ed-4387-a69d-ca42dc63b199" (UID: "d3afe9df-46ed-4387-a69d-ca42dc63b199"). InnerVolumeSpecName "kube-api-access-gn92w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426067 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3afe9df-46ed-4387-a69d-ca42dc63b199-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426474 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn92w\" (UniqueName: \"kubernetes.io/projected/d3afe9df-46ed-4387-a69d-ca42dc63b199-kube-api-access-gn92w\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426489 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhw8g\" (UniqueName: \"kubernetes.io/projected/35cb67ab-46a2-4b5e-9ca0-f7442581a175-kube-api-access-fhw8g\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426502 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb292c6-b1bc-4c62-a3a5-753730fcd643-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426512 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426521 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/35cb67ab-46a2-4b5e-9ca0-f7442581a175-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426530 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxstg\" (UniqueName: \"kubernetes.io/projected/0eb292c6-b1bc-4c62-a3a5-753730fcd643-kube-api-access-nxstg\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.426539 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/35cb67ab-46a2-4b5e-9ca0-f7442581a175-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.602419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c-config-qx8g6" event={"ID":"35cb67ab-46a2-4b5e-9ca0-f7442581a175","Type":"ContainerDied","Data":"bda1527f5ee1ffa026901d32e60c86684f5956f95ec90f7d4e0cdc4c13735f67"} Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.602476 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda1527f5ee1ffa026901d32e60c86684f5956f95ec90f7d4e0cdc4c13735f67" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.602535 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rm92c-config-qx8g6" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.604702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9450-account-create-vlh78" event={"ID":"0eb292c6-b1bc-4c62-a3a5-753730fcd643","Type":"ContainerDied","Data":"ada4457e24c714d231804d785429567b6308f11c619030b065afd21482c5603a"} Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.604739 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada4457e24c714d231804d785429567b6308f11c619030b065afd21482c5603a" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.604739 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9450-account-create-vlh78" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.613512 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f26956e-77a1-4cef-8fe2-1c5e398f7b96" containerID="887a79386e9217424aa800cb35eef23c59e3cf8a8bb1f2591a6932ebabb407b5" exitCode=0 Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.613601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmvcq" event={"ID":"2f26956e-77a1-4cef-8fe2-1c5e398f7b96","Type":"ContainerDied","Data":"887a79386e9217424aa800cb35eef23c59e3cf8a8bb1f2591a6932ebabb407b5"} Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.618234 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lc8rx" Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.619746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lc8rx" event={"ID":"d3afe9df-46ed-4387-a69d-ca42dc63b199","Type":"ContainerDied","Data":"bc583fe642263f8ea35c6994697a90de15fee3d8668a6102d155a1d336354fc7"} Nov 22 07:40:18 crc kubenswrapper[4858]: I1122 07:40:18.619802 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc583fe642263f8ea35c6994697a90de15fee3d8668a6102d155a1d336354fc7" Nov 22 07:40:19 crc kubenswrapper[4858]: I1122 07:40:19.362602 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rm92c-config-qx8g6"] Nov 22 07:40:19 crc kubenswrapper[4858]: I1122 07:40:19.370026 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rm92c-config-qx8g6"] Nov 22 07:40:19 crc kubenswrapper[4858]: I1122 07:40:19.553054 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cb67ab-46a2-4b5e-9ca0-f7442581a175" path="/var/lib/kubelet/pods/35cb67ab-46a2-4b5e-9ca0-f7442581a175/volumes" Nov 22 07:40:19 crc kubenswrapper[4858]: I1122 07:40:19.629620 4858 generic.go:334] "Generic (PLEG): container finished" podID="ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" containerID="81249119f01504e2f75136ccaa20d76ad79562ce6c4c032f420d15e3ac22cfbc" exitCode=0 Nov 22 07:40:19 crc kubenswrapper[4858]: I1122 07:40:19.629758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2343-account-create-tlktz" event={"ID":"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf","Type":"ContainerDied","Data":"81249119f01504e2f75136ccaa20d76ad79562ce6c4c032f420d15e3ac22cfbc"} Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.052469 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.166149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dgbc\" (UniqueName: \"kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc\") pod \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.166547 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts\") pod \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\" (UID: \"2f26956e-77a1-4cef-8fe2-1c5e398f7b96\") " Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.167482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f26956e-77a1-4cef-8fe2-1c5e398f7b96" (UID: "2f26956e-77a1-4cef-8fe2-1c5e398f7b96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.176711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc" (OuterVolumeSpecName: "kube-api-access-4dgbc") pod "2f26956e-77a1-4cef-8fe2-1c5e398f7b96" (UID: "2f26956e-77a1-4cef-8fe2-1c5e398f7b96"). InnerVolumeSpecName "kube-api-access-4dgbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.270187 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dgbc\" (UniqueName: \"kubernetes.io/projected/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-kube-api-access-4dgbc\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.270229 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f26956e-77a1-4cef-8fe2-1c5e398f7b96-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.642238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerStarted","Data":"39fde520f058b73ce73c8fd11a8bfa24e055a38211b694c36194ba6867caba1b"} Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.643453 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.643549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerStarted","Data":"6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09"} Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.644529 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gmvcq" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.644514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmvcq" event={"ID":"2f26956e-77a1-4cef-8fe2-1c5e398f7b96","Type":"ContainerDied","Data":"351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d"} Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.648589 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="351ce82e49d45f720c4fe744d2bd577c1447a74817379dbc50ee2670201c942d" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.688120 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.128356215 podStartE2EDuration="4.688097293s" podCreationTimestamp="2025-11-22 07:40:16 +0000 UTC" firstStartedPulling="2025-11-22 07:40:17.389702424 +0000 UTC m=+1779.231125430" lastFinishedPulling="2025-11-22 07:40:19.949443502 +0000 UTC m=+1781.790866508" observedRunningTime="2025-11-22 07:40:20.671711018 +0000 UTC m=+1782.513134024" watchObservedRunningTime="2025-11-22 07:40:20.688097293 +0000 UTC m=+1782.529520299" Nov 22 07:40:20 crc kubenswrapper[4858]: I1122 07:40:20.985141 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.083444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts\") pod \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.083590 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485r2\" (UniqueName: \"kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2\") pod \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\" (UID: \"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf\") " Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.083986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" (UID: "ff5221eb-b8ee-4271-a2fe-627f0d08d2cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.088965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2" (OuterVolumeSpecName: "kube-api-access-485r2") pod "ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" (UID: "ff5221eb-b8ee-4271-a2fe-627f0d08d2cf"). InnerVolumeSpecName "kube-api-access-485r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.186107 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.186191 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485r2\" (UniqueName: \"kubernetes.io/projected/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf-kube-api-access-485r2\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.654554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2343-account-create-tlktz" event={"ID":"ff5221eb-b8ee-4271-a2fe-627f0d08d2cf","Type":"ContainerDied","Data":"0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f"} Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.654919 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0872ece7edaa2e4bbb9299236b546fa06b2545d7746ab507785b4cbfdb9e2c1f" Nov 22 07:40:21 crc kubenswrapper[4858]: I1122 07:40:21.654623 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2343-account-create-tlktz" Nov 22 07:40:22 crc kubenswrapper[4858]: I1122 07:40:22.535881 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:40:22 crc kubenswrapper[4858]: E1122 07:40:22.538697 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:22 crc kubenswrapper[4858]: I1122 07:40:22.664176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-czmj7" event={"ID":"98e3f90c-3676-41ee-ab2d-f0dca9196a02","Type":"ContainerStarted","Data":"67f96849e31d122e4179b6efb15731fb368f96111e60368078186e3ff4dfdd2c"} Nov 22 07:40:22 crc kubenswrapper[4858]: I1122 07:40:22.688206 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-czmj7" podStartSLOduration=2.271960767 podStartE2EDuration="37.688187637s" podCreationTimestamp="2025-11-22 07:39:45 +0000 UTC" firstStartedPulling="2025-11-22 07:39:45.978514618 +0000 UTC m=+1747.819937624" lastFinishedPulling="2025-11-22 07:40:21.394741488 +0000 UTC m=+1783.236164494" observedRunningTime="2025-11-22 07:40:22.686191322 +0000 UTC m=+1784.527614328" watchObservedRunningTime="2025-11-22 07:40:22.688187637 +0000 UTC m=+1784.529610643" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.453610 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-djszx"] Nov 22 07:40:26 crc kubenswrapper[4858]: E1122 07:40:26.454573 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3afe9df-46ed-4387-a69d-ca42dc63b199" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454591 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3afe9df-46ed-4387-a69d-ca42dc63b199" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: 
E1122 07:40:26.454607 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454613 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: E1122 07:40:26.454625 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb292c6-b1bc-4c62-a3a5-753730fcd643" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454631 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb292c6-b1bc-4c62-a3a5-753730fcd643" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: E1122 07:40:26.454649 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cb67ab-46a2-4b5e-9ca0-f7442581a175" containerName="ovn-config" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454655 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cb67ab-46a2-4b5e-9ca0-f7442581a175" containerName="ovn-config" Nov 22 07:40:26 crc kubenswrapper[4858]: E1122 07:40:26.454671 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f26956e-77a1-4cef-8fe2-1c5e398f7b96" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454677 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f26956e-77a1-4cef-8fe2-1c5e398f7b96" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454834 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454864 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb292c6-b1bc-4c62-a3a5-753730fcd643" containerName="mariadb-account-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454876 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3afe9df-46ed-4387-a69d-ca42dc63b199" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454893 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cb67ab-46a2-4b5e-9ca0-f7442581a175" containerName="ovn-config" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.454904 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f26956e-77a1-4cef-8fe2-1c5e398f7b96" containerName="mariadb-database-create" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.455520 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.458776 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pk5hd" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.459486 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.474923 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-djszx"] Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.585070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.585174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.585207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmk4\" (UniqueName: \"kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.585232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.687475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.687565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.687598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmmk4\" (UniqueName: \"kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.687614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data\") pod 
\"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.695870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.696433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.697480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.724835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmmk4\" (UniqueName: \"kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4\") pod \"glance-db-sync-djszx\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " pod="openstack/glance-db-sync-djszx" Nov 22 07:40:26 crc kubenswrapper[4858]: I1122 07:40:26.778689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-djszx" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.299612 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.531234 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-djszx"] Nov 22 07:40:27 crc kubenswrapper[4858]: W1122 07:40:27.537704 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb4885ab_de3a_4ccf_bfd4_a702a3b9d647.slice/crio-151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5 WatchSource:0}: Error finding container 151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5: Status 404 returned error can't find the container with id 151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5 Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.696547 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.701248 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-s8p7b"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.702567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.708877 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0446-account-create-d8spp"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.710002 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.717006 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.721498 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-s8p7b"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.728923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-djszx" event={"ID":"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647","Type":"ContainerStarted","Data":"151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5"} Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.735267 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0446-account-create-d8spp"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.806745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.807157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98st9\" (UniqueName: \"kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.807217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.807361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrdd\" (UniqueName: \"kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.834555 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xg4xq"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.835762 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.847775 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xg4xq"] Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.909871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.909968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98st9\" (UniqueName: \"kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.910044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.910117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrdd\" (UniqueName: \"kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.910213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.910361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cfh\" (UniqueName: \"kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.910720 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.911332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.941463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98st9\" (UniqueName: \"kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9\") pod \"cinder-db-create-s8p7b\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:27 crc kubenswrapper[4858]: I1122 07:40:27.943736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrdd\" (UniqueName: \"kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd\") pod \"barbican-0446-account-create-d8spp\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.007267 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1521-account-create-dnfjd"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.008455 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.011937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5cfh\" (UniqueName: \"kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.012173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.013686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.018533 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.032954 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1521-account-create-dnfjd"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.033854 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.037861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5cfh\" (UniqueName: \"kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh\") pod \"barbican-db-create-xg4xq\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.041708 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.113785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.113871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlg4n\" (UniqueName: \"kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.116216 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-586g4"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.117664 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.148150 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-586g4"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.158630 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.190888 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sxstk"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.192499 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.196101 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.197265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.197539 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.205048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8f6m6" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.250283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.250631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnjh\" (UniqueName: \"kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.250797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.250970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.251058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.251137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.251219 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlg4n\" (UniqueName: \"kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 
07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.253007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.266514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sxstk"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.302190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlg4n\" (UniqueName: \"kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n\") pod \"cinder-1521-account-create-dnfjd\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.352978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.354309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.354374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.354621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.354663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnjh\" (UniqueName: \"kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.358108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.358423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle\") pod \"keystone-db-sync-sxstk\" (UID: 
\"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.362291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.381952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnjh\" (UniqueName: \"kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh\") pod \"neutron-db-create-586g4\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.397512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd\") pod \"keystone-db-sync-sxstk\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.435526 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-eea0-account-create-cvdw5"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.437100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.445143 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.446134 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-eea0-account-create-cvdw5"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.478606 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.494407 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-586g4" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.558519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwpd\" (UniqueName: \"kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.558720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.562699 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sxstk" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.660453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwpd\" (UniqueName: \"kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.660625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.661977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.692309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwpd\" (UniqueName: \"kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd\") pod \"neutron-eea0-account-create-cvdw5\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.759291 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-s8p7b"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.763777 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.859023 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xg4xq"] Nov 22 07:40:28 crc kubenswrapper[4858]: I1122 07:40:28.877854 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0446-account-create-d8spp"] Nov 22 07:40:28 crc kubenswrapper[4858]: W1122 07:40:28.893917 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1dd36f7_e035_455e_92a8_9bf84fdb8829.slice/crio-bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41 WatchSource:0}: Error finding container bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41: Status 404 returned error can't find the container with id bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41 Nov 22 07:40:28 crc kubenswrapper[4858]: W1122 07:40:28.909265 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b5c6a5d_7db6_4083_97b4_6868dc190b66.slice/crio-3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e WatchSource:0}: Error finding container 3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e: Status 404 returned error can't find the container with id 3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.379446 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sxstk"] Nov 22 07:40:29 crc kubenswrapper[4858]: W1122 07:40:29.403749 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43b2326f_6238_4686_8e42_5bd33c074357.slice/crio-43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d WatchSource:0}: Error finding container 43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d: Status 404 returned error can't find the container with id 43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.500958 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1521-account-create-dnfjd"] Nov 22 07:40:29 crc kubenswrapper[4858]: W1122 07:40:29.505393 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2adb039_7bf0_4b67_b6e5_28c0e7692ccb.slice/crio-ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8 WatchSource:0}: Error finding container ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8: Status 404 returned error can't find the container with id ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8 Nov 22 07:40:29 crc kubenswrapper[4858]: W1122 07:40:29.522090 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c72c65_31d4_4eed_bf9d_9358c14642ec.slice/crio-cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe WatchSource:0}: Error finding container cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe: Status 404 returned error can't find the container with id cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.527731 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-eea0-account-create-cvdw5"] Nov 22 07:40:29 crc kubenswrapper[4858]: W1122 07:40:29.544413 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6936b381_bdd5_459a_b440_a1b6ae1aba52.slice/crio-ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3 WatchSource:0}: Error finding container ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3: Status 404 returned error can't find the container with id ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3 Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.559122 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-586g4"] Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.772608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxstk" event={"ID":"43b2326f-6238-4686-8e42-5bd33c074357","Type":"ContainerStarted","Data":"43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.782880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0446-account-create-d8spp" event={"ID":"a1dd36f7-e035-455e-92a8-9bf84fdb8829","Type":"ContainerStarted","Data":"bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.791649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1521-account-create-dnfjd" event={"ID":"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb","Type":"ContainerStarted","Data":"ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.795474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s8p7b" event={"ID":"07ea9610-8d23-4826-af0f-3b82ee456527","Type":"ContainerStarted","Data":"351e36399968f44795bdf01c0a3336593a4e2d78c58f2276ea4a37a84816f938"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.797303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eea0-account-create-cvdw5" event={"ID":"22c72c65-31d4-4eed-bf9d-9358c14642ec","Type":"ContainerStarted","Data":"cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.798593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-586g4" event={"ID":"6936b381-bdd5-459a-b440-a1b6ae1aba52","Type":"ContainerStarted","Data":"ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3"} Nov 22 07:40:29 crc kubenswrapper[4858]: I1122 07:40:29.799696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xg4xq" event={"ID":"7b5c6a5d-7db6-4083-97b4-6868dc190b66","Type":"ContainerStarted","Data":"3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.828820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s8p7b" event={"ID":"07ea9610-8d23-4826-af0f-3b82ee456527","Type":"ContainerStarted","Data":"4268d28e0d7669552cd784717affef98db42d1f02dd3a45710ff4af9661f0dec"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.832696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eea0-account-create-cvdw5" 
event={"ID":"22c72c65-31d4-4eed-bf9d-9358c14642ec","Type":"ContainerStarted","Data":"e45ad93bdcfb7d3ad64ecf2597dcbec1ee1f0bb2ff160f3cd3b89c57ee80f12d"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.835663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-586g4" event={"ID":"6936b381-bdd5-459a-b440-a1b6ae1aba52","Type":"ContainerStarted","Data":"251ca398e74de7732f2cc51f902e5158e1046e03a38fc8c768ee48563f9f231a"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.838641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xg4xq" event={"ID":"7b5c6a5d-7db6-4083-97b4-6868dc190b66","Type":"ContainerStarted","Data":"fb36d8654ae3d6515feade85100caacf66c69e3e46dec05899254d5dc28b6d08"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.841895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0446-account-create-d8spp" event={"ID":"a1dd36f7-e035-455e-92a8-9bf84fdb8829","Type":"ContainerStarted","Data":"cf8b537af4b8c32c28f7db79176ae05e3ffafea339b9f896e59649f11eba428c"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.862747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1521-account-create-dnfjd" event={"ID":"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb","Type":"ContainerStarted","Data":"a1cf1344d8fa4a530a9c19077eaef6e03fd43ff1247eeb398c4df12950f2881c"} Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.871550 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-s8p7b" podStartSLOduration=3.8715296869999998 podStartE2EDuration="3.871529687s" podCreationTimestamp="2025-11-22 07:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:30.847080373 +0000 UTC m=+1792.688503379" watchObservedRunningTime="2025-11-22 07:40:30.871529687 +0000 UTC m=+1792.712952693" Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.883168 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-586g4" podStartSLOduration=2.883141428 podStartE2EDuration="2.883141428s" podCreationTimestamp="2025-11-22 07:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:30.877295361 +0000 UTC m=+1792.718718387" watchObservedRunningTime="2025-11-22 07:40:30.883141428 +0000 UTC m=+1792.724564454" Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.914858 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-eea0-account-create-cvdw5" podStartSLOduration=2.914831004 podStartE2EDuration="2.914831004s" podCreationTimestamp="2025-11-22 07:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:30.896732865 +0000 UTC m=+1792.738155871" watchObservedRunningTime="2025-11-22 07:40:30.914831004 +0000 UTC m=+1792.756254010" Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.932879 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xg4xq" podStartSLOduration=3.932857792 podStartE2EDuration="3.932857792s" podCreationTimestamp="2025-11-22 07:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 07:40:30.921414195 +0000 UTC m=+1792.762837211" watchObservedRunningTime="2025-11-22 07:40:30.932857792 +0000 UTC m=+1792.774280798" Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.947513 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0446-account-create-d8spp" podStartSLOduration=3.94749004 podStartE2EDuration="3.94749004s" podCreationTimestamp="2025-11-22 07:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:30.941120547 +0000 UTC m=+1792.782543553" watchObservedRunningTime="2025-11-22 07:40:30.94749004 +0000 UTC m=+1792.788913056" Nov 22 07:40:30 crc kubenswrapper[4858]: I1122 07:40:30.972548 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1521-account-create-dnfjd" podStartSLOduration=3.972527673 podStartE2EDuration="3.972527673s" podCreationTimestamp="2025-11-22 07:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:30.966424358 +0000 UTC m=+1792.807847394" watchObservedRunningTime="2025-11-22 07:40:30.972527673 +0000 UTC m=+1792.813950679" Nov 22 07:40:31 crc kubenswrapper[4858]: I1122 07:40:31.873080 4858 generic.go:334] "Generic (PLEG): container finished" podID="6936b381-bdd5-459a-b440-a1b6ae1aba52" containerID="251ca398e74de7732f2cc51f902e5158e1046e03a38fc8c768ee48563f9f231a" exitCode=0 Nov 22 07:40:31 crc kubenswrapper[4858]: I1122 07:40:31.873156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-586g4" event={"ID":"6936b381-bdd5-459a-b440-a1b6ae1aba52","Type":"ContainerDied","Data":"251ca398e74de7732f2cc51f902e5158e1046e03a38fc8c768ee48563f9f231a"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.176706 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.891045 4858 generic.go:334] "Generic (PLEG): container finished" podID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" containerID="67f96849e31d122e4179b6efb15731fb368f96111e60368078186e3ff4dfdd2c" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.891118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-czmj7" event={"ID":"98e3f90c-3676-41ee-ab2d-f0dca9196a02","Type":"ContainerDied","Data":"67f96849e31d122e4179b6efb15731fb368f96111e60368078186e3ff4dfdd2c"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.893249 4858 generic.go:334] "Generic (PLEG): container finished" podID="a1dd36f7-e035-455e-92a8-9bf84fdb8829" containerID="cf8b537af4b8c32c28f7db79176ae05e3ffafea339b9f896e59649f11eba428c" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.893300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0446-account-create-d8spp" event={"ID":"a1dd36f7-e035-455e-92a8-9bf84fdb8829","Type":"ContainerDied","Data":"cf8b537af4b8c32c28f7db79176ae05e3ffafea339b9f896e59649f11eba428c"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.899686 4858 generic.go:334] "Generic (PLEG): container finished" podID="e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" containerID="a1cf1344d8fa4a530a9c19077eaef6e03fd43ff1247eeb398c4df12950f2881c" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.899881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-1521-account-create-dnfjd" event={"ID":"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb","Type":"ContainerDied","Data":"a1cf1344d8fa4a530a9c19077eaef6e03fd43ff1247eeb398c4df12950f2881c"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.901705 4858 generic.go:334] "Generic (PLEG): container finished" podID="07ea9610-8d23-4826-af0f-3b82ee456527" containerID="4268d28e0d7669552cd784717affef98db42d1f02dd3a45710ff4af9661f0dec" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.901788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s8p7b" event={"ID":"07ea9610-8d23-4826-af0f-3b82ee456527","Type":"ContainerDied","Data":"4268d28e0d7669552cd784717affef98db42d1f02dd3a45710ff4af9661f0dec"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.905704 4858 generic.go:334] "Generic (PLEG): container finished" podID="22c72c65-31d4-4eed-bf9d-9358c14642ec" containerID="e45ad93bdcfb7d3ad64ecf2597dcbec1ee1f0bb2ff160f3cd3b89c57ee80f12d" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.908460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eea0-account-create-cvdw5" event={"ID":"22c72c65-31d4-4eed-bf9d-9358c14642ec","Type":"ContainerDied","Data":"e45ad93bdcfb7d3ad64ecf2597dcbec1ee1f0bb2ff160f3cd3b89c57ee80f12d"} Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.912926 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b5c6a5d-7db6-4083-97b4-6868dc190b66" containerID="fb36d8654ae3d6515feade85100caacf66c69e3e46dec05899254d5dc28b6d08" exitCode=0 Nov 22 07:40:32 crc kubenswrapper[4858]: I1122 07:40:32.913160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xg4xq" event={"ID":"7b5c6a5d-7db6-4083-97b4-6868dc190b66","Type":"ContainerDied","Data":"fb36d8654ae3d6515feade85100caacf66c69e3e46dec05899254d5dc28b6d08"} Nov 22 07:40:33 crc kubenswrapper[4858]: I1122 07:40:33.535805 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:40:33 crc kubenswrapper[4858]: E1122 07:40:33.536423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.514707 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.536227 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.543952 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.551641 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5cfh\" (UniqueName: \"kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh\") pod \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602211 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts\") pod \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\" (UID: \"7b5c6a5d-7db6-4083-97b4-6868dc190b66\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts\") pod \"07ea9610-8d23-4826-af0f-3b82ee456527\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602467 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkb56\" (UniqueName: \"kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602634 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts\") pod 
\"22c72c65-31d4-4eed-bf9d-9358c14642ec\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602661 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf\") pod \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\" (UID: \"98e3f90c-3676-41ee-ab2d-f0dca9196a02\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602699 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98st9\" (UniqueName: \"kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9\") pod \"07ea9610-8d23-4826-af0f-3b82ee456527\" (UID: \"07ea9610-8d23-4826-af0f-3b82ee456527\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.602728 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwpd\" (UniqueName: \"kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd\") pod \"22c72c65-31d4-4eed-bf9d-9358c14642ec\" (UID: \"22c72c65-31d4-4eed-bf9d-9358c14642ec\") " Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.603526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22c72c65-31d4-4eed-bf9d-9358c14642ec" (UID: "22c72c65-31d4-4eed-bf9d-9358c14642ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.603624 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07ea9610-8d23-4826-af0f-3b82ee456527" (UID: "07ea9610-8d23-4826-af0f-3b82ee456527"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.603650 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.603680 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b5c6a5d-7db6-4083-97b4-6868dc190b66" (UID: "7b5c6a5d-7db6-4083-97b4-6868dc190b66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.604114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.609589 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd" (OuterVolumeSpecName: "kube-api-access-mpwpd") pod "22c72c65-31d4-4eed-bf9d-9358c14642ec" (UID: "22c72c65-31d4-4eed-bf9d-9358c14642ec"). InnerVolumeSpecName "kube-api-access-mpwpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.609613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh" (OuterVolumeSpecName: "kube-api-access-m5cfh") pod "7b5c6a5d-7db6-4083-97b4-6868dc190b66" (UID: "7b5c6a5d-7db6-4083-97b4-6868dc190b66"). InnerVolumeSpecName "kube-api-access-m5cfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.610736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56" (OuterVolumeSpecName: "kube-api-access-pkb56") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "kube-api-access-pkb56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.613491 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9" (OuterVolumeSpecName: "kube-api-access-98st9") pod "07ea9610-8d23-4826-af0f-3b82ee456527" (UID: "07ea9610-8d23-4826-af0f-3b82ee456527"). InnerVolumeSpecName "kube-api-access-98st9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.620041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.633887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts" (OuterVolumeSpecName: "scripts") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.646554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.655365 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98e3f90c-3676-41ee-ab2d-f0dca9196a02" (UID: "98e3f90c-3676-41ee-ab2d-f0dca9196a02"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709277 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709334 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c72c65-31d4-4eed-bf9d-9358c14642ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709349 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709362 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98st9\" (UniqueName: \"kubernetes.io/projected/07ea9610-8d23-4826-af0f-3b82ee456527-kube-api-access-98st9\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709378 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwpd\" (UniqueName: \"kubernetes.io/projected/22c72c65-31d4-4eed-bf9d-9358c14642ec-kube-api-access-mpwpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709387 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5cfh\" (UniqueName: \"kubernetes.io/projected/7b5c6a5d-7db6-4083-97b4-6868dc190b66-kube-api-access-m5cfh\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709396 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98e3f90c-3676-41ee-ab2d-f0dca9196a02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709405 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b5c6a5d-7db6-4083-97b4-6868dc190b66-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709417 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709429 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07ea9610-8d23-4826-af0f-3b82ee456527-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709442 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98e3f90c-3676-41ee-ab2d-f0dca9196a02-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709454 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkb56\" (UniqueName: \"kubernetes.io/projected/98e3f90c-3676-41ee-ab2d-f0dca9196a02-kube-api-access-pkb56\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.709465 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/98e3f90c-3676-41ee-ab2d-f0dca9196a02-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.992691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s8p7b" event={"ID":"07ea9610-8d23-4826-af0f-3b82ee456527","Type":"ContainerDied","Data":"351e36399968f44795bdf01c0a3336593a4e2d78c58f2276ea4a37a84816f938"} Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.992753 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="351e36399968f44795bdf01c0a3336593a4e2d78c58f2276ea4a37a84816f938" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.992794 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-s8p7b" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.994788 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-eea0-account-create-cvdw5" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.994964 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-eea0-account-create-cvdw5" event={"ID":"22c72c65-31d4-4eed-bf9d-9358c14642ec","Type":"ContainerDied","Data":"cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe"} Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.995043 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0aee71a945a7b41205191213487d3d5b611021f4beb55c859e39f28d0759fe" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.996867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xg4xq" event={"ID":"7b5c6a5d-7db6-4083-97b4-6868dc190b66","Type":"ContainerDied","Data":"3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e"} Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.996895 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a10bee681ae2fb5be5f0217134df3eaa15c3de53633be4d9c770a491600402e" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.996878 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xg4xq" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.998867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-czmj7" event={"ID":"98e3f90c-3676-41ee-ab2d-f0dca9196a02","Type":"ContainerDied","Data":"9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89"} Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.998888 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c4680d6a6b703f93f1cd4385df87f8b2f7ae3bc953047a565582a09c03d5a89" Nov 22 07:40:40 crc kubenswrapper[4858]: I1122 07:40:40.998998 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-czmj7" Nov 22 07:40:44 crc kubenswrapper[4858]: I1122 07:40:44.535584 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:40:44 crc kubenswrapper[4858]: E1122 07:40:44.536298 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:48 crc kubenswrapper[4858]: I1122 07:40:48.577685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:40:48 crc kubenswrapper[4858]: I1122 07:40:48.588268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"swift-storage-0\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " pod="openstack/swift-storage-0" Nov 22 07:40:48 crc kubenswrapper[4858]: I1122 07:40:48.713644 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:40:53 crc kubenswrapper[4858]: E1122 07:40:53.344096 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29" Nov 22 07:40:53 crc kubenswrapper[4858]: E1122 07:40:53.344658 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmmk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-djszx_openstack(bb4885ab-de3a-4ccf-bfd4-a702a3b9d647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:40:53 crc kubenswrapper[4858]: E1122 07:40:53.345862 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-djszx" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.623417 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.639655 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-586g4" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.664428 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts\") pod \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.664628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvrdd\" (UniqueName: \"kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd\") pod \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\" (UID: \"a1dd36f7-e035-455e-92a8-9bf84fdb8829\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.668081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1dd36f7-e035-455e-92a8-9bf84fdb8829" (UID: "a1dd36f7-e035-455e-92a8-9bf84fdb8829"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.677784 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd" (OuterVolumeSpecName: "kube-api-access-lvrdd") pod "a1dd36f7-e035-455e-92a8-9bf84fdb8829" (UID: "a1dd36f7-e035-455e-92a8-9bf84fdb8829"). InnerVolumeSpecName "kube-api-access-lvrdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.679457 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts\") pod \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlg4n\" (UniqueName: \"kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n\") pod \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\" (UID: \"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766141 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts\") pod \"6936b381-bdd5-459a-b440-a1b6ae1aba52\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766295 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpnjh\" (UniqueName: \"kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh\") pod \"6936b381-bdd5-459a-b440-a1b6ae1aba52\" (UID: \"6936b381-bdd5-459a-b440-a1b6ae1aba52\") " Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766662 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvrdd\" (UniqueName: \"kubernetes.io/projected/a1dd36f7-e035-455e-92a8-9bf84fdb8829-kube-api-access-lvrdd\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.766688 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1dd36f7-e035-455e-92a8-9bf84fdb8829-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.767869 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" (UID: "e2adb039-7bf0-4b67-b6e5-28c0e7692ccb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.767888 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6936b381-bdd5-459a-b440-a1b6ae1aba52" (UID: "6936b381-bdd5-459a-b440-a1b6ae1aba52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.770255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh" (OuterVolumeSpecName: "kube-api-access-lpnjh") pod "6936b381-bdd5-459a-b440-a1b6ae1aba52" (UID: "6936b381-bdd5-459a-b440-a1b6ae1aba52"). InnerVolumeSpecName "kube-api-access-lpnjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.770719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n" (OuterVolumeSpecName: "kube-api-access-xlg4n") pod "e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" (UID: "e2adb039-7bf0-4b67-b6e5-28c0e7692ccb"). InnerVolumeSpecName "kube-api-access-xlg4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.867993 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlg4n\" (UniqueName: \"kubernetes.io/projected/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-kube-api-access-xlg4n\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.868500 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6936b381-bdd5-459a-b440-a1b6ae1aba52-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.868567 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpnjh\" (UniqueName: \"kubernetes.io/projected/6936b381-bdd5-459a-b440-a1b6ae1aba52-kube-api-access-lpnjh\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:53 crc kubenswrapper[4858]: I1122 07:40:53.868680 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.003925 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.114702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0446-account-create-d8spp" event={"ID":"a1dd36f7-e035-455e-92a8-9bf84fdb8829","Type":"ContainerDied","Data":"bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41"} Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.115070 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce8ea7e0a84541333c620c054bb03093f41f585f20621a9918a167d509e9e41" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.115009 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0446-account-create-d8spp" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.117081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1521-account-create-dnfjd" event={"ID":"e2adb039-7bf0-4b67-b6e5-28c0e7692ccb","Type":"ContainerDied","Data":"ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8"} Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.117128 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe7a4e1c1b88b742da5daf633f54ccd89b26d547aa214959cee3940ea2344a8" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.117191 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1521-account-create-dnfjd" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.120694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"ae7edd80f218a450bd8bb2175eabf9ca34cccf65815ee7663b10d1e1e7b63945"} Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.123330 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-586g4" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.123303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-586g4" event={"ID":"6936b381-bdd5-459a-b440-a1b6ae1aba52","Type":"ContainerDied","Data":"ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3"} Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.123740 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced1e09a430217e1f1cb8e6f555379de7138d088c1b000e035de4c88ac4528b3" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.125347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxstk" event={"ID":"43b2326f-6238-4686-8e42-5bd33c074357","Type":"ContainerStarted","Data":"e861834850f977b602f18ed8d17b254529dd837b73735c0bbac78e6b2b23be6f"} Nov 22 07:40:54 crc kubenswrapper[4858]: E1122 07:40:54.127404 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29\\\"\"" pod="openstack/glance-db-sync-djszx" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" Nov 22 07:40:54 crc kubenswrapper[4858]: I1122 07:40:54.187811 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sxstk" podStartSLOduration=2.178492838 podStartE2EDuration="26.187783556s" podCreationTimestamp="2025-11-22 07:40:28 +0000 UTC" firstStartedPulling="2025-11-22 07:40:29.407727478 +0000 UTC m=+1791.249150484" lastFinishedPulling="2025-11-22 07:40:53.417018196 +0000 UTC m=+1815.258441202" observedRunningTime="2025-11-22 07:40:54.177655412 +0000 UTC m=+1816.019078438" watchObservedRunningTime="2025-11-22 07:40:54.187783556 +0000 UTC m=+1816.029206562" Nov 22 07:40:55 crc kubenswrapper[4858]: I1122 07:40:55.536263 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:40:55 crc kubenswrapper[4858]: E1122 07:40:55.536945 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:40:57 crc kubenswrapper[4858]: I1122 07:40:57.155744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70"} Nov 22 07:40:57 crc kubenswrapper[4858]: I1122 07:40:57.156029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57"} Nov 22 07:40:57 crc kubenswrapper[4858]: I1122 07:40:57.156041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b"} Nov 22 07:40:58 crc kubenswrapper[4858]: I1122 07:40:58.171226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c"} Nov 22 07:41:00 crc kubenswrapper[4858]: I1122 07:41:00.194396 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7"} Nov 22 07:41:00 crc kubenswrapper[4858]: I1122 07:41:00.194815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f"} Nov 22 07:41:01 crc kubenswrapper[4858]: I1122 07:41:01.208544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4"} Nov 22 07:41:01 crc kubenswrapper[4858]: I1122 07:41:01.208857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97"} Nov 22 07:41:06 crc kubenswrapper[4858]: I1122 07:41:06.267622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52"} Nov 22 07:41:08 crc kubenswrapper[4858]: I1122 07:41:08.298090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600"} Nov 22 07:41:08 crc kubenswrapper[4858]: I1122 07:41:08.537168 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:41:08 crc kubenswrapper[4858]: E1122 07:41:08.537622 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:41:09 crc kubenswrapper[4858]: I1122 07:41:09.323634 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3"} Nov 22 07:41:09 crc kubenswrapper[4858]: I1122 07:41:09.324200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098"} Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.365979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab"} Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.366578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307"} Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.366595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerStarted","Data":"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66"} Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.412535 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=75.958646262 podStartE2EDuration="1m27.412498736s" podCreationTimestamp="2025-11-22 07:39:43 +0000 UTC" firstStartedPulling="2025-11-22 07:40:54.01844474 +0000 UTC m=+1815.859867756" lastFinishedPulling="2025-11-22 07:41:05.472297224 +0000 UTC m=+1827.313720230" observedRunningTime="2025-11-22 07:41:10.404112388 +0000 UTC m=+1832.245535414" watchObservedRunningTime="2025-11-22 07:41:10.412498736 +0000 UTC m=+1832.253921742" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818339 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818814 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c72c65-31d4-4eed-bf9d-9358c14642ec" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818834 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c72c65-31d4-4eed-bf9d-9358c14642ec" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818853 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818862 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818873 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" containerName="swift-ring-rebalance" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818881 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" containerName="swift-ring-rebalance" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818896 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dd36f7-e035-455e-92a8-9bf84fdb8829" 
containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818903 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dd36f7-e035-455e-92a8-9bf84fdb8829" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818919 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ea9610-8d23-4826-af0f-3b82ee456527" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818927 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ea9610-8d23-4826-af0f-3b82ee456527" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818944 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6936b381-bdd5-459a-b440-a1b6ae1aba52" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818952 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6936b381-bdd5-459a-b440-a1b6ae1aba52" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: E1122 07:41:10.818964 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5c6a5d-7db6-4083-97b4-6868dc190b66" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.818974 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5c6a5d-7db6-4083-97b4-6868dc190b66" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819166 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5c6a5d-7db6-4083-97b4-6868dc190b66" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819187 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819204 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6936b381-bdd5-459a-b440-a1b6ae1aba52" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819224 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dd36f7-e035-455e-92a8-9bf84fdb8829" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819233 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" containerName="swift-ring-rebalance" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819245 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c72c65-31d4-4eed-bf9d-9358c14642ec" containerName="mariadb-account-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.819257 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ea9610-8d23-4826-af0f-3b82ee456527" containerName="mariadb-database-create" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.824765 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.836017 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 22 07:41:10 crc kubenswrapper[4858]: I1122 07:41:10.858797 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4sm\" (UniqueName: \"kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008723 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.008857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: 
\"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc4sm\" (UniqueName: \"kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.111405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.112375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.112558 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.112567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.112566 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.113247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: 
I1122 07:41:11.136135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc4sm\" (UniqueName: \"kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm\") pod \"dnsmasq-dns-56766df65f-zzccz\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:11 crc kubenswrapper[4858]: I1122 07:41:11.149755 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:12 crc kubenswrapper[4858]: I1122 07:41:12.905282 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:13 crc kubenswrapper[4858]: I1122 07:41:13.436315 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerID="0831bf9af5102380822e9b6d1616c6fb76c18523b04a68f59a08de9d33c47fdb" exitCode=0 Nov 22 07:41:13 crc kubenswrapper[4858]: I1122 07:41:13.436441 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-zzccz" event={"ID":"2c500fbc-74fc-4766-9abf-ceb634c0e0a3","Type":"ContainerDied","Data":"0831bf9af5102380822e9b6d1616c6fb76c18523b04a68f59a08de9d33c47fdb"} Nov 22 07:41:13 crc kubenswrapper[4858]: I1122 07:41:13.436760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-zzccz" event={"ID":"2c500fbc-74fc-4766-9abf-ceb634c0e0a3","Type":"ContainerStarted","Data":"b3ba70f12097c50e34e23f2f1ceddccd359d4cf00cbe24730f5aebc3caf54f06"} Nov 22 07:41:13 crc kubenswrapper[4858]: I1122 07:41:13.441122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-djszx" event={"ID":"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647","Type":"ContainerStarted","Data":"aa40f6ffa3b5047db31ed930a0581a3ab393038f8637f6aa84f0906dfaa6ab25"} Nov 22 07:41:13 crc kubenswrapper[4858]: I1122 07:41:13.497795 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-djszx" podStartSLOduration=2.7710365660000003 podStartE2EDuration="47.497756105s" podCreationTimestamp="2025-11-22 07:40:26 +0000 UTC" firstStartedPulling="2025-11-22 07:40:27.539524201 +0000 UTC m=+1789.380947207" lastFinishedPulling="2025-11-22 07:41:12.26624374 +0000 UTC m=+1834.107666746" observedRunningTime="2025-11-22 07:41:13.49166593 +0000 UTC m=+1835.333088936" watchObservedRunningTime="2025-11-22 07:41:13.497756105 +0000 UTC m=+1835.339179131" Nov 22 07:41:14 crc kubenswrapper[4858]: I1122 07:41:14.452433 4858 generic.go:334] "Generic (PLEG): container finished" podID="43b2326f-6238-4686-8e42-5bd33c074357" containerID="e861834850f977b602f18ed8d17b254529dd837b73735c0bbac78e6b2b23be6f" exitCode=0 Nov 22 07:41:14 crc kubenswrapper[4858]: I1122 07:41:14.452630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxstk" event={"ID":"43b2326f-6238-4686-8e42-5bd33c074357","Type":"ContainerDied","Data":"e861834850f977b602f18ed8d17b254529dd837b73735c0bbac78e6b2b23be6f"} Nov 22 07:41:14 crc kubenswrapper[4858]: I1122 07:41:14.457267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-zzccz" event={"ID":"2c500fbc-74fc-4766-9abf-ceb634c0e0a3","Type":"ContainerStarted","Data":"840fbf0b502336483f72df7129cf7f67d23563c53f4ea69c82ffdd06ebd15d2a"} Nov 22 07:41:14 crc kubenswrapper[4858]: I1122 07:41:14.457461 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.844603 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sxstk" Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.882421 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56766df65f-zzccz" podStartSLOduration=5.882390662 podStartE2EDuration="5.882390662s" podCreationTimestamp="2025-11-22 07:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:14.505676374 +0000 UTC m=+1836.347099380" watchObservedRunningTime="2025-11-22 07:41:15.882390662 +0000 UTC m=+1837.723813678" Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.935022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd\") pod \"43b2326f-6238-4686-8e42-5bd33c074357\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.935123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle\") pod \"43b2326f-6238-4686-8e42-5bd33c074357\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.935369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data\") pod \"43b2326f-6238-4686-8e42-5bd33c074357\" (UID: \"43b2326f-6238-4686-8e42-5bd33c074357\") " Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.941766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd" (OuterVolumeSpecName: "kube-api-access-cq4wd") pod "43b2326f-6238-4686-8e42-5bd33c074357" (UID: "43b2326f-6238-4686-8e42-5bd33c074357"). InnerVolumeSpecName "kube-api-access-cq4wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.962232 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43b2326f-6238-4686-8e42-5bd33c074357" (UID: "43b2326f-6238-4686-8e42-5bd33c074357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:41:15 crc kubenswrapper[4858]: I1122 07:41:15.986192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data" (OuterVolumeSpecName: "config-data") pod "43b2326f-6238-4686-8e42-5bd33c074357" (UID: "43b2326f-6238-4686-8e42-5bd33c074357"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.037176 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/43b2326f-6238-4686-8e42-5bd33c074357-kube-api-access-cq4wd\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.037242 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.037256 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b2326f-6238-4686-8e42-5bd33c074357-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.475896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sxstk" event={"ID":"43b2326f-6238-4686-8e42-5bd33c074357","Type":"ContainerDied","Data":"43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d"} Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.475947 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43eeec396a23286bb8251cc8f4d9a2557b8ab41a8b82d96096f423d26214043d" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.476024 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sxstk" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.756636 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.757094 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56766df65f-zzccz" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="dnsmasq-dns" containerID="cri-o://840fbf0b502336483f72df7129cf7f67d23563c53f4ea69c82ffdd06ebd15d2a" gracePeriod=10 Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.794301 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:16 crc kubenswrapper[4858]: E1122 07:41:16.794829 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b2326f-6238-4686-8e42-5bd33c074357" containerName="keystone-db-sync" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.795151 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b2326f-6238-4686-8e42-5bd33c074357" containerName="keystone-db-sync" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.795554 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b2326f-6238-4686-8e42-5bd33c074357" containerName="keystone-db-sync" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.796693 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.820960 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.843893 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gwwlj"] Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.849949 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852061 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8d4\" (UniqueName: \"kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852122 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.852219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.855551 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.855896 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.856113 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.856296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8f6m6" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.856472 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.911437 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gwwlj"] Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954117 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7ph9\" (UniqueName: \"kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954614 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8d4\" (UniqueName: \"kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.954677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.955694 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.956723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.956934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.960117 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:16 crc kubenswrapper[4858]: I1122 07:41:16.970482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.001667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8d4\" (UniqueName: \"kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4\") pod \"dnsmasq-dns-6ddbfc445f-s5kfv\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7ph9\" (UniqueName: 
\"kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059494 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059537 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.059591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.063650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.071729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.080915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.089921 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " 
pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.093988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.096321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7ph9\" (UniqueName: \"kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9\") pod \"keystone-bootstrap-gwwlj\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.113961 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4c8pg"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.115400 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.118921 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.119703 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.120670 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d4vvf" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.121984 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.158799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4c8pg"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.165362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.165435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9x9v\" (UniqueName: \"kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.165542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.203961 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hfjnq"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.215361 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.225813 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dpj5x" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.226483 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.226835 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.236553 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dmpsm"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.250049 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.251313 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dmpsm"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.251527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.260596 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.261828 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x9z2x" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.277850 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hfjnq"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7kfj\" (UniqueName: \"kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.279932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.280013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.280086 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.280159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhwsg\" (UniqueName: \"kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.280238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9x9v\" (UniqueName: \"kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.304808 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.305966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.307277 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.315886 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.318722 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.317554 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.328012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.343153 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.349102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9x9v\" (UniqueName: \"kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v\") pod \"neutron-db-sync-4c8pg\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.386483 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rkx92"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.387746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.387837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7kfj\" (UniqueName: \"kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 
07:41:17.388423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388484 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.388669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhwsg\" (UniqueName: \"kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.396512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.401505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.403131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.408729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.413507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.414309 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.426787 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.428467 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2rw2b" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.428715 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.429022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.470970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.471637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7kfj\" (UniqueName: \"kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj\") pod \"cinder-db-sync-hfjnq\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.475410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhwsg\" (UniqueName: \"kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg\") pod \"barbican-db-sync-dmpsm\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.483627 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.487638 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.492695 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.492909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.492970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.493407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcwq\" (UniqueName: \"kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.493915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.494644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.494682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.506851 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rkx92"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.533430 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.551184 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerID="840fbf0b502336483f72df7129cf7f67d23563c53f4ea69c82ffdd06ebd15d2a" exitCode=0 Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.563970 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.567120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-zzccz" event={"ID":"2c500fbc-74fc-4766-9abf-ceb634c0e0a3","Type":"ContainerDied","Data":"840fbf0b502336483f72df7129cf7f67d23563c53f4ea69c82ffdd06ebd15d2a"} Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcwq\" (UniqueName: \"kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596179 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 
07:41:17.596318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4w8l\" (UniqueName: \"kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7d5\" (UniqueName: \"kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.596571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.599240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.600159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.603167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.610593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.616715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.623075 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.636835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcwq\" (UniqueName: \"kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.636872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.651381 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.652775 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.698477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.699567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.703491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4w8l\" (UniqueName: \"kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7d5\" (UniqueName: \"kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle\") pod 
\"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.704659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.705527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.706262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.707975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.710944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.711668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.713445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.713750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 
07:41:17.715716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.729536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.729675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7d5\" (UniqueName: \"kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5\") pod \"dnsmasq-dns-6bdb874957-96wfv\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.736262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4w8l\" (UniqueName: \"kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l\") pod \"placement-db-sync-rkx92\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.764661 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rkx92" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.821365 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822224 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc4sm\" (UniqueName: \"kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: 
\"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.822647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb\") pod \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\" (UID: \"2c500fbc-74fc-4766-9abf-ceb634c0e0a3\") " Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.833638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm" (OuterVolumeSpecName: "kube-api-access-hc4sm") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "kube-api-access-hc4sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.887009 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.914246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.926720 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.926772 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc4sm\" (UniqueName: \"kubernetes.io/projected/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-kube-api-access-hc4sm\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.930404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config" (OuterVolumeSpecName: "config") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.936216 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.936401 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: I1122 07:41:17.953713 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c500fbc-74fc-4766-9abf-ceb634c0e0a3" (UID: "2c500fbc-74fc-4766-9abf-ceb634c0e0a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:17 crc kubenswrapper[4858]: W1122 07:41:17.963398 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15468583_d4db_4642_adef_71ebbdb0a68b.slice/crio-6ef7e6df44beacadf0c87288c3d2319c8883395b71feb5a9b90b4f99a7044687 WatchSource:0}: Error finding container 6ef7e6df44beacadf0c87288c3d2319c8883395b71feb5a9b90b4f99a7044687: Status 404 returned error can't find the container with id 6ef7e6df44beacadf0c87288c3d2319c8883395b71feb5a9b90b4f99a7044687 Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.050852 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.050890 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.050904 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.050915 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c500fbc-74fc-4766-9abf-ceb634c0e0a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.418062 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gwwlj"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.597129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gwwlj" event={"ID":"c1b28c05-4791-460f-8788-b1674871efa0","Type":"ContainerStarted","Data":"3132752816e1bbd50ea728333a373a9153b17d22dcf8b53d01383e6deffcc312"} Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.613108 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-zzccz" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.613899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4c8pg"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.614019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-zzccz" event={"ID":"2c500fbc-74fc-4766-9abf-ceb634c0e0a3","Type":"ContainerDied","Data":"b3ba70f12097c50e34e23f2f1ceddccd359d4cf00cbe24730f5aebc3caf54f06"} Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.614155 4858 scope.go:117] "RemoveContainer" containerID="840fbf0b502336483f72df7129cf7f67d23563c53f4ea69c82ffdd06ebd15d2a" Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.637176 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dmpsm"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.644685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" event={"ID":"15468583-d4db-4642-adef-71ebbdb0a68b","Type":"ContainerStarted","Data":"6ef7e6df44beacadf0c87288c3d2319c8883395b71feb5a9b90b4f99a7044687"} Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.692879 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hfjnq"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.699293 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.722766 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-zzccz"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.783017 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.954806 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rkx92"] Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.983909 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:41:18 crc kubenswrapper[4858]: I1122 07:41:18.984534 4858 scope.go:117] "RemoveContainer" containerID="0831bf9af5102380822e9b6d1616c6fb76c18523b04a68f59a08de9d33c47fdb" Nov 22 07:41:19 crc kubenswrapper[4858]: W1122 07:41:19.038146 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41812ee_66ac_438e_82b5_cb404aa95294.slice/crio-c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287 WatchSource:0}: Error finding container c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287: Status 404 returned error can't find the container with id c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287 Nov 22 07:41:19 crc kubenswrapper[4858]: W1122 07:41:19.040149 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0a74856_97e8_4850_8b13_4fc1a6523ae6.slice/crio-4cd58ddc497f67537d111c9661db6514b5a697ca48414500df4e753e2ee4a6ce WatchSource:0}: Error finding container 4cd58ddc497f67537d111c9661db6514b5a697ca48414500df4e753e2ee4a6ce: Status 404 returned error can't find the container with id 4cd58ddc497f67537d111c9661db6514b5a697ca48414500df4e753e2ee4a6ce Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.259544 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:41:19 crc kubenswrapper[4858]: W1122 07:41:19.261422 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a5bc60_0e0b_4a28_88b7_49321247f37a.slice/crio-d0356efe655ce8d9e540887fbe58eab7f2cc027e1b20e931b6f9c3a1a21b2b51 WatchSource:0}: Error finding container d0356efe655ce8d9e540887fbe58eab7f2cc027e1b20e931b6f9c3a1a21b2b51: Status 404 returned error can't find the container with id d0356efe655ce8d9e540887fbe58eab7f2cc027e1b20e931b6f9c3a1a21b2b51 Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.556419 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" path="/var/lib/kubelet/pods/2c500fbc-74fc-4766-9abf-ceb634c0e0a3/volumes" Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.669055 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.682453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rkx92" event={"ID":"d41812ee-66ac-438e-82b5-cb404aa95294","Type":"ContainerStarted","Data":"c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.697869 4858 generic.go:334] "Generic (PLEG): container finished" podID="15468583-d4db-4642-adef-71ebbdb0a68b" containerID="1989a89bf4312be1530c6baaccb37d90744546f0e3626a87872f5f316c779ceb" exitCode=0 Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.698291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" event={"ID":"15468583-d4db-4642-adef-71ebbdb0a68b","Type":"ContainerDied","Data":"1989a89bf4312be1530c6baaccb37d90744546f0e3626a87872f5f316c779ceb"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.704058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmpsm" event={"ID":"854da42b-c1a7-4390-91cf-2fa7fa3e8eab","Type":"ContainerStarted","Data":"c4f4bcd9c12b95cad57ec8980f43b2386b6a282ffd95f74b02a56e0761a6ed99"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.711170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hfjnq" event={"ID":"6f5e0507-55cd-49e4-bf31-1e13d0bfee53","Type":"ContainerStarted","Data":"7040ee90c8c4ef20bef095ba75b61745a4664c7e7d7ba5855b21168a60cbb2ee"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.731064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" event={"ID":"24a5bc60-0e0b-4a28-88b7-49321247f37a","Type":"ContainerStarted","Data":"d0356efe655ce8d9e540887fbe58eab7f2cc027e1b20e931b6f9c3a1a21b2b51"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.736273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4c8pg" event={"ID":"12958341-df4b-4746-9621-04a44a4dafea","Type":"ContainerStarted","Data":"7424937b63e055893b5aae4bd3bd82c0b7a1388a0f97c8f17d97e275fc381ff3"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.736360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4c8pg" event={"ID":"12958341-df4b-4746-9621-04a44a4dafea","Type":"ContainerStarted","Data":"3fd346777c9f9fbeccf5f5ac7165fc3c1d38c06dc4e6623ea5d675635711af7d"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.743522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerStarted","Data":"4cd58ddc497f67537d111c9661db6514b5a697ca48414500df4e753e2ee4a6ce"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.747253 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gwwlj" event={"ID":"c1b28c05-4791-460f-8788-b1674871efa0","Type":"ContainerStarted","Data":"18b6954890d3d3bafc895e4e66130cfcc62022719c2856ccabe8b729e4c34b20"} Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.869706 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4c8pg" podStartSLOduration=2.869667626 podStartE2EDuration="2.869667626s" podCreationTimestamp="2025-11-22 07:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:19.862765535 +0000 UTC m=+1841.704188541" watchObservedRunningTime="2025-11-22 07:41:19.869667626 +0000 UTC m=+1841.711090632" Nov 22 07:41:19 crc kubenswrapper[4858]: I1122 07:41:19.930168 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gwwlj" podStartSLOduration=3.930138854 podStartE2EDuration="3.930138854s" podCreationTimestamp="2025-11-22 07:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:19.916011331 +0000 UTC m=+1841.757434337" watchObservedRunningTime="2025-11-22 07:41:19.930138854 +0000 UTC m=+1841.771561870" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.316472 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.337537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.338238 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.338303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r8d4\" (UniqueName: \"kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.338393 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.338601 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: 
\"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.338658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0\") pod \"15468583-d4db-4642-adef-71ebbdb0a68b\" (UID: \"15468583-d4db-4642-adef-71ebbdb0a68b\") " Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.364560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4" (OuterVolumeSpecName: "kube-api-access-5r8d4") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "kube-api-access-5r8d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.377287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.406075 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.406107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config" (OuterVolumeSpecName: "config") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.430993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.441696 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.442112 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.442193 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r8d4\" (UniqueName: \"kubernetes.io/projected/15468583-d4db-4642-adef-71ebbdb0a68b-kube-api-access-5r8d4\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.442270 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.442376 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.473153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15468583-d4db-4642-adef-71ebbdb0a68b" (UID: "15468583-d4db-4642-adef-71ebbdb0a68b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.544595 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15468583-d4db-4642-adef-71ebbdb0a68b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.771000 4858 generic.go:334] "Generic (PLEG): container finished" podID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerID="7310120c55b4ce88603e9cd0c7b4f626edcfddefd6eb6e17c47588cbd584448b" exitCode=0 Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.771108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" event={"ID":"24a5bc60-0e0b-4a28-88b7-49321247f37a","Type":"ContainerDied","Data":"7310120c55b4ce88603e9cd0c7b4f626edcfddefd6eb6e17c47588cbd584448b"} Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.779059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" event={"ID":"15468583-d4db-4642-adef-71ebbdb0a68b","Type":"ContainerDied","Data":"6ef7e6df44beacadf0c87288c3d2319c8883395b71feb5a9b90b4f99a7044687"} Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.779139 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ddbfc445f-s5kfv" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.779171 4858 scope.go:117] "RemoveContainer" containerID="1989a89bf4312be1530c6baaccb37d90744546f0e3626a87872f5f316c779ceb" Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.880273 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:20 crc kubenswrapper[4858]: I1122 07:41:20.889151 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ddbfc445f-s5kfv"] Nov 22 07:41:21 crc kubenswrapper[4858]: I1122 07:41:21.850201 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15468583-d4db-4642-adef-71ebbdb0a68b" path="/var/lib/kubelet/pods/15468583-d4db-4642-adef-71ebbdb0a68b/volumes" Nov 22 07:41:22 crc kubenswrapper[4858]: I1122 07:41:22.535802 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:41:22 crc kubenswrapper[4858]: E1122 07:41:22.536420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:41:23 crc kubenswrapper[4858]: I1122 07:41:23.814424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" event={"ID":"24a5bc60-0e0b-4a28-88b7-49321247f37a","Type":"ContainerStarted","Data":"44f43be9ee1e6688eafaa0de4640204cd8d01b20ac225fb286e7ec36253259ee"} Nov 22 07:41:24 crc kubenswrapper[4858]: I1122 07:41:24.824792 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:24 crc kubenswrapper[4858]: I1122 07:41:24.855715 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" podStartSLOduration=7.855695826 podStartE2EDuration="7.855695826s" podCreationTimestamp="2025-11-22 07:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:24.85363518 +0000 UTC m=+1846.695058196" watchObservedRunningTime="2025-11-22 07:41:24.855695826 +0000 UTC m=+1846.697118832" Nov 22 07:41:32 crc kubenswrapper[4858]: I1122 07:41:32.823563 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:41:32 crc kubenswrapper[4858]: I1122 07:41:32.927899 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:41:32 crc kubenswrapper[4858]: I1122 07:41:32.928244 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" containerID="cri-o://4312d0a90cc8263244016ad6ab9b8e2c56f8007d2ada5a13da0a81e22caa7617" gracePeriod=10 Nov 22 07:41:33 crc kubenswrapper[4858]: I1122 07:41:33.641396 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.114:5353: connect: connection refused" Nov 22 07:41:34 crc kubenswrapper[4858]: I1122 07:41:34.535600 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:41:34 crc kubenswrapper[4858]: E1122 07:41:34.535871 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:41:36 crc kubenswrapper[4858]: I1122 07:41:36.969827 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerID="4312d0a90cc8263244016ad6ab9b8e2c56f8007d2ada5a13da0a81e22caa7617" exitCode=0 Nov 22 07:41:36 crc kubenswrapper[4858]: I1122 07:41:36.969980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" event={"ID":"1f00ad69-2781-482a-aa97-43bfe1f33f76","Type":"ContainerDied","Data":"4312d0a90cc8263244016ad6ab9b8e2c56f8007d2ada5a13da0a81e22caa7617"} Nov 22 07:41:43 crc kubenswrapper[4858]: I1122 07:41:43.646535 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:41:47 crc kubenswrapper[4858]: I1122 07:41:47.536181 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:41:47 crc kubenswrapper[4858]: E1122 07:41:47.537367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:41:48 crc kubenswrapper[4858]: I1122 07:41:48.649454 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:41:48 crc kubenswrapper[4858]: I1122 07:41:48.650831 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:41:53 crc kubenswrapper[4858]: I1122 07:41:53.651296 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:41:58 crc kubenswrapper[4858]: I1122 07:41:58.654007 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:42:00 crc kubenswrapper[4858]: I1122 07:42:00.535783 4858 scope.go:117] "RemoveContainer" 
containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:42:00 crc kubenswrapper[4858]: E1122 07:42:00.536152 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:42:03 crc kubenswrapper[4858]: E1122 07:42:03.485429 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 22 07:42:03 crc kubenswrapper[4858]: E1122 07:42:03.486014 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7kfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hfjnq_openstack(6f5e0507-55cd-49e4-bf31-1e13d0bfee53): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:42:03 crc kubenswrapper[4858]: E1122 07:42:03.487233 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hfjnq" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" Nov 22 07:42:03 crc kubenswrapper[4858]: I1122 07:42:03.655475 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:42:04 crc kubenswrapper[4858]: E1122 07:42:04.246453 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-hfjnq" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" Nov 22 07:42:05 crc kubenswrapper[4858]: I1122 07:42:05.256261 4858 generic.go:334] "Generic (PLEG): container finished" podID="c1b28c05-4791-460f-8788-b1674871efa0" containerID="18b6954890d3d3bafc895e4e66130cfcc62022719c2856ccabe8b729e4c34b20" exitCode=0 Nov 22 07:42:05 crc kubenswrapper[4858]: I1122 07:42:05.256538 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gwwlj" event={"ID":"c1b28c05-4791-460f-8788-b1674871efa0","Type":"ContainerDied","Data":"18b6954890d3d3bafc895e4e66130cfcc62022719c2856ccabe8b729e4c34b20"} Nov 22 07:42:05 crc kubenswrapper[4858]: E1122 07:42:05.538051 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099" Nov 22 07:42:05 crc kubenswrapper[4858]: E1122 07:42:05.538718 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4w8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-rkx92_openstack(d41812ee-66ac-438e-82b5-cb404aa95294): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:42:05 crc kubenswrapper[4858]: E1122 07:42:05.539957 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-rkx92" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.218750 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.281182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" event={"ID":"1f00ad69-2781-482a-aa97-43bfe1f33f76","Type":"ContainerDied","Data":"359bb93f6ef96b1c15c05fbea2aadd3db28b80e7fbece8dd84b61e69e46f103b"} Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.281267 4858 scope.go:117] "RemoveContainer" containerID="4312d0a90cc8263244016ad6ab9b8e2c56f8007d2ada5a13da0a81e22caa7617" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.281303 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" Nov 22 07:42:06 crc kubenswrapper[4858]: E1122 07:42:06.282940 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099\\\"\"" pod="openstack/placement-db-sync-rkx92" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.333779 4858 scope.go:117] "RemoveContainer" containerID="87fe90dc85bee746441fd67018989c515e806015d6bbac9627638115bc9f8c88" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.369014 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g5cd\" (UniqueName: \"kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd\") pod \"1f00ad69-2781-482a-aa97-43bfe1f33f76\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.369102 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc\") pod \"1f00ad69-2781-482a-aa97-43bfe1f33f76\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.369130 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb\") pod \"1f00ad69-2781-482a-aa97-43bfe1f33f76\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.369360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config\") pod \"1f00ad69-2781-482a-aa97-43bfe1f33f76\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.369415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb\") pod \"1f00ad69-2781-482a-aa97-43bfe1f33f76\" (UID: \"1f00ad69-2781-482a-aa97-43bfe1f33f76\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.377300 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd" (OuterVolumeSpecName: "kube-api-access-5g5cd") pod "1f00ad69-2781-482a-aa97-43bfe1f33f76" (UID: "1f00ad69-2781-482a-aa97-43bfe1f33f76"). InnerVolumeSpecName "kube-api-access-5g5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.449434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f00ad69-2781-482a-aa97-43bfe1f33f76" (UID: "1f00ad69-2781-482a-aa97-43bfe1f33f76"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.462550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f00ad69-2781-482a-aa97-43bfe1f33f76" (UID: "1f00ad69-2781-482a-aa97-43bfe1f33f76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.472515 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g5cd\" (UniqueName: \"kubernetes.io/projected/1f00ad69-2781-482a-aa97-43bfe1f33f76-kube-api-access-5g5cd\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.472582 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.472591 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.483256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config" (OuterVolumeSpecName: "config") pod "1f00ad69-2781-482a-aa97-43bfe1f33f76" (UID: "1f00ad69-2781-482a-aa97-43bfe1f33f76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.485998 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f00ad69-2781-482a-aa97-43bfe1f33f76" (UID: "1f00ad69-2781-482a-aa97-43bfe1f33f76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.573901 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.573934 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f00ad69-2781-482a-aa97-43bfe1f33f76-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.617762 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.632819 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-58wfb"] Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.818000 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981156 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7ph9\" (UniqueName: \"kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.981802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data\") pod \"c1b28c05-4791-460f-8788-b1674871efa0\" (UID: \"c1b28c05-4791-460f-8788-b1674871efa0\") " Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.986339 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.986378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts" (OuterVolumeSpecName: "scripts") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.987255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:06 crc kubenswrapper[4858]: I1122 07:42:06.987518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9" (OuterVolumeSpecName: "kube-api-access-f7ph9") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "kube-api-access-f7ph9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.010749 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.014316 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data" (OuterVolumeSpecName: "config-data") pod "c1b28c05-4791-460f-8788-b1674871efa0" (UID: "c1b28c05-4791-460f-8788-b1674871efa0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085521 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085609 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085626 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7ph9\" (UniqueName: \"kubernetes.io/projected/c1b28c05-4791-460f-8788-b1674871efa0-kube-api-access-f7ph9\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085639 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085648 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.085657 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b28c05-4791-460f-8788-b1674871efa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.298518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gwwlj" event={"ID":"c1b28c05-4791-460f-8788-b1674871efa0","Type":"ContainerDied","Data":"3132752816e1bbd50ea728333a373a9153b17d22dcf8b53d01383e6deffcc312"} Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.299443 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3132752816e1bbd50ea728333a373a9153b17d22dcf8b53d01383e6deffcc312" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 
07:42:07.298749 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gwwlj" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.368217 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gwwlj"] Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.376074 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gwwlj"] Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473393 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4pmzl"] Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473814 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473830 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473851 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b28c05-4791-460f-8788-b1674871efa0" containerName="keystone-bootstrap" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473860 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b28c05-4791-460f-8788-b1674871efa0" containerName="keystone-bootstrap" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473879 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473888 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473905 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15468583-d4db-4642-adef-71ebbdb0a68b" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473913 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15468583-d4db-4642-adef-71ebbdb0a68b" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473944 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.473959 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.473968 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="init" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.474176 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b28c05-4791-460f-8788-b1674871efa0" containerName="keystone-bootstrap" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.474200 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c500fbc-74fc-4766-9abf-ceb634c0e0a3" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.474224 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15468583-d4db-4642-adef-71ebbdb0a68b" containerName="init" Nov 22 07:42:07 crc 
kubenswrapper[4858]: I1122 07:42:07.474240 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.475184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.477812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.478925 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.480066 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8f6m6" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.480395 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.483000 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.491124 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4pmzl"] Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.547049 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" path="/var/lib/kubelet/pods/1f00ad69-2781-482a-aa97-43bfe1f33f76/volumes" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.547999 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b28c05-4791-460f-8788-b1674871efa0" path="/var/lib/kubelet/pods/c1b28c05-4791-460f-8788-b1674871efa0/volumes" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.593890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfgx\" (UniqueName: \"kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.658193 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.658668 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhwsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dmpsm_openstack(854da42b-c1a7-4390-91cf-2fa7fa3e8eab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:42:07 crc kubenswrapper[4858]: E1122 07:42:07.659928 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dmpsm" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696102 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxfgx\" (UniqueName: \"kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.696249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.902098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxfgx\" (UniqueName: \"kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.902481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.902636 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.903485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " 
pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.981265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:07 crc kubenswrapper[4858]: I1122 07:42:07.981968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts\") pod \"keystone-bootstrap-4pmzl\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:08 crc kubenswrapper[4858]: I1122 07:42:08.101695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:08 crc kubenswrapper[4858]: E1122 07:42:08.308288 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645\\\"\"" pod="openstack/barbican-db-sync-dmpsm" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" Nov 22 07:42:08 crc kubenswrapper[4858]: I1122 07:42:08.537383 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4pmzl"] Nov 22 07:42:08 crc kubenswrapper[4858]: I1122 07:42:08.656392 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-58wfb" podUID="1f00ad69-2781-482a-aa97-43bfe1f33f76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Nov 22 07:42:09 crc kubenswrapper[4858]: I1122 07:42:09.316558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pmzl" event={"ID":"b6475152-0db7-4069-a206-1b854a1529d1","Type":"ContainerStarted","Data":"03247020450c55c9447576c0dd6795c81d264b55dc7d457d5afec7bf014f391c"} Nov 22 07:42:10 crc kubenswrapper[4858]: E1122 07:42:10.266025 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3\": context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140" Nov 22 07:42:10 crc kubenswrapper[4858]: E1122 07:42:10.266529 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh7hd6h89hd8hf6h5f5h5d4hbhd7h678h686h5b6h5b8h65bh67bh5cdh58chdfh74hbch655h69h657h67bh595h689h658h645hbch65dh95q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d0a74856-97e8-4850-8b13-4fc1a6523ae6): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3\": context canceled" logger="UnhandledError" Nov 22 07:42:10 crc kubenswrapper[4858]: I1122 07:42:10.329348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pmzl" event={"ID":"b6475152-0db7-4069-a206-1b854a1529d1","Type":"ContainerStarted","Data":"8762094dad89b147d20563b6fe92d61a77318c8599c070e5c78908fdb39ce0f7"} Nov 22 07:42:10 crc kubenswrapper[4858]: I1122 07:42:10.348634 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4pmzl" podStartSLOduration=3.348613379 podStartE2EDuration="3.348613379s" podCreationTimestamp="2025-11-22 07:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:42:10.346653736 +0000 UTC m=+1892.188076742" watchObservedRunningTime="2025-11-22 07:42:10.348613379 +0000 UTC m=+1892.190036375" Nov 22 07:42:14 crc kubenswrapper[4858]: I1122 07:42:14.535267 4858 
scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:42:14 crc kubenswrapper[4858]: E1122 07:42:14.537285 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:42:17 crc kubenswrapper[4858]: I1122 07:42:17.397375 4858 generic.go:334] "Generic (PLEG): container finished" podID="b6475152-0db7-4069-a206-1b854a1529d1" containerID="8762094dad89b147d20563b6fe92d61a77318c8599c070e5c78908fdb39ce0f7" exitCode=0 Nov 22 07:42:17 crc kubenswrapper[4858]: I1122 07:42:17.397503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pmzl" event={"ID":"b6475152-0db7-4069-a206-1b854a1529d1","Type":"ContainerDied","Data":"8762094dad89b147d20563b6fe92d61a77318c8599c070e5c78908fdb39ce0f7"} Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.414702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerStarted","Data":"ae8cac7ee6fb9bf5ba53845a562b7108b049035825a439784784314c390b08ae"} Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.812878 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844203 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844280 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844602 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxfgx\" (UniqueName: \"kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844664 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.844753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts\") pod \"b6475152-0db7-4069-a206-1b854a1529d1\" (UID: \"b6475152-0db7-4069-a206-1b854a1529d1\") " Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.850657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts" (OuterVolumeSpecName: "scripts") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.862867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx" (OuterVolumeSpecName: "kube-api-access-gxfgx") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "kube-api-access-gxfgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.863162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.871918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.880989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data" (OuterVolumeSpecName: "config-data") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.883879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6475152-0db7-4069-a206-1b854a1529d1" (UID: "b6475152-0db7-4069-a206-1b854a1529d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.946845 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.947133 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxfgx\" (UniqueName: \"kubernetes.io/projected/b6475152-0db7-4069-a206-1b854a1529d1-kube-api-access-gxfgx\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.947213 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.947335 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.947424 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4858]: I1122 07:42:18.947503 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6475152-0db7-4069-a206-1b854a1529d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.432433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pmzl" event={"ID":"b6475152-0db7-4069-a206-1b854a1529d1","Type":"ContainerDied","Data":"03247020450c55c9447576c0dd6795c81d264b55dc7d457d5afec7bf014f391c"} Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.432494 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03247020450c55c9447576c0dd6795c81d264b55dc7d457d5afec7bf014f391c" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.432574 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pmzl" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.536661 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:42:19 crc kubenswrapper[4858]: E1122 07:42:19.537504 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6475152-0db7-4069-a206-1b854a1529d1" containerName="keystone-bootstrap" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.537533 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6475152-0db7-4069-a206-1b854a1529d1" containerName="keystone-bootstrap" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.537895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6475152-0db7-4069-a206-1b854a1529d1" containerName="keystone-bootstrap" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.538862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.552008 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8f6m6" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.553250 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.555859 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.556154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.556381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.560200 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.570862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.570951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.570992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.571061 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.571102 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnbz\" (UniqueName: \"kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.571129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.571151 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.571217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.635552 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnbz\" (UniqueName: \"kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673809 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.673987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.674027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " 
pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.674054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.683092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.683743 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.684089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.686246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.689550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.691573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.692252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.709685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnbz\" (UniqueName: \"kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz\") pod \"keystone-7b67c6cff8-nl4sb\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:19 crc kubenswrapper[4858]: I1122 07:42:19.890206 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:20 crc kubenswrapper[4858]: I1122 07:42:20.423731 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:42:21 crc kubenswrapper[4858]: I1122 07:42:21.465681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b67c6cff8-nl4sb" event={"ID":"f4d4fda9-31aa-46b8-983a-ffa32db2516c","Type":"ContainerStarted","Data":"4b2278b5a2b63a8809b3b18c14d3d73fbbf028ec81bae4f82dec2b606ada88b7"} Nov 22 07:42:21 crc kubenswrapper[4858]: I1122 07:42:21.466127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b67c6cff8-nl4sb" event={"ID":"f4d4fda9-31aa-46b8-983a-ffa32db2516c","Type":"ContainerStarted","Data":"520f9aba91bdcb850898081ba1e44197856612df9eaf0f1a4bd49d34d96bab94"} Nov 22 07:42:22 crc kubenswrapper[4858]: I1122 07:42:22.478286 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:23 crc kubenswrapper[4858]: I1122 07:42:23.612427 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b67c6cff8-nl4sb" podStartSLOduration=4.612391544 podStartE2EDuration="4.612391544s" podCreationTimestamp="2025-11-22 07:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:42:22.508614739 +0000 UTC m=+1904.350037745" watchObservedRunningTime="2025-11-22 07:42:23.612391544 +0000 UTC m=+1905.453814550" Nov 22 07:42:28 crc kubenswrapper[4858]: I1122 07:42:28.536420 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:42:28 crc kubenswrapper[4858]: E1122 07:42:28.537336 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:42:34 crc kubenswrapper[4858]: E1122 07:42:34.226561 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1" Nov 22 07:42:34 crc kubenswrapper[4858]: E1122 07:42:34.228251 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jcwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d0a74856-97e8-4850-8b13-4fc1a6523ae6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:42:34 crc kubenswrapper[4858]: I1122 07:42:34.602769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hfjnq" event={"ID":"6f5e0507-55cd-49e4-bf31-1e13d0bfee53","Type":"ContainerStarted","Data":"0be496c05b6ca9bbc0552d43b838acc7ab82ea2f2a395f854baaaaee0619ac0a"} Nov 22 07:42:34 crc kubenswrapper[4858]: I1122 07:42:34.631582 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hfjnq" podStartSLOduration=13.172224119 podStartE2EDuration="1m17.631552394s" podCreationTimestamp="2025-11-22 07:41:17 +0000 UTC" firstStartedPulling="2025-11-22 07:41:19.033001724 +0000 UTC m=+1840.874424730" lastFinishedPulling="2025-11-22 07:42:23.492329989 +0000 UTC m=+1905.333753005" observedRunningTime="2025-11-22 07:42:34.622739462 +0000 UTC m=+1916.464162468" watchObservedRunningTime="2025-11-22 07:42:34.631552394 +0000 UTC m=+1916.472975400" Nov 22 07:42:36 crc kubenswrapper[4858]: I1122 07:42:36.642764 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmpsm" event={"ID":"854da42b-c1a7-4390-91cf-2fa7fa3e8eab","Type":"ContainerStarted","Data":"52223809a6d6bfb7225e42121de5c27970a68606da724fbdc5f05682783c72f0"} Nov 22 07:42:36 crc kubenswrapper[4858]: I1122 07:42:36.645400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rkx92" event={"ID":"d41812ee-66ac-438e-82b5-cb404aa95294","Type":"ContainerStarted","Data":"f20acacb794a33f3c4580766d27a38e6353236383e5589415e8e4d4c9d95c565"} Nov 22 07:42:36 crc kubenswrapper[4858]: I1122 07:42:36.670393 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dmpsm" podStartSLOduration=2.799011499 podStartE2EDuration="1m19.670360198s" podCreationTimestamp="2025-11-22 07:41:17 +0000 UTC" firstStartedPulling="2025-11-22 07:41:18.983591271 +0000 UTC m=+1840.825014277" lastFinishedPulling="2025-11-22 07:42:35.85493997 +0000 UTC m=+1917.696362976" observedRunningTime="2025-11-22 07:42:36.665171031 
+0000 UTC m=+1918.506594037" watchObservedRunningTime="2025-11-22 07:42:36.670360198 +0000 UTC m=+1918.511783224" Nov 22 07:42:36 crc kubenswrapper[4858]: I1122 07:42:36.695295 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rkx92" podStartSLOduration=2.908365623 podStartE2EDuration="1m19.695269176s" podCreationTimestamp="2025-11-22 07:41:17 +0000 UTC" firstStartedPulling="2025-11-22 07:41:19.065963941 +0000 UTC m=+1840.907386947" lastFinishedPulling="2025-11-22 07:42:35.852867504 +0000 UTC m=+1917.694290500" observedRunningTime="2025-11-22 07:42:36.686430343 +0000 UTC m=+1918.527853379" watchObservedRunningTime="2025-11-22 07:42:36.695269176 +0000 UTC m=+1918.536692182" Nov 22 07:42:39 crc kubenswrapper[4858]: I1122 07:42:39.544405 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:42:39 crc kubenswrapper[4858]: E1122 07:42:39.547097 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:42:49 crc kubenswrapper[4858]: E1122 07:42:49.716227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ceilometer-central/blobs/sha256:08fad4a9f449c4a3be062addbb554562e3445b584a69c7dcbc2d322db57ff6f3\\\": context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" Nov 22 07:42:49 crc kubenswrapper[4858]: I1122 07:42:49.781715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerStarted","Data":"4f2053569e8d4f528ed8d5bac1d5bf3ad51613208839a9e2c576015655b85a63"} Nov 22 07:42:49 crc kubenswrapper[4858]: I1122 07:42:49.782307 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="proxy-httpd" containerID="cri-o://4f2053569e8d4f528ed8d5bac1d5bf3ad51613208839a9e2c576015655b85a63" gracePeriod=30 Nov 22 07:42:49 crc kubenswrapper[4858]: I1122 07:42:49.782801 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="ceilometer-notification-agent" containerID="cri-o://ae8cac7ee6fb9bf5ba53845a562b7108b049035825a439784784314c390b08ae" gracePeriod=30 Nov 22 07:42:50 crc kubenswrapper[4858]: I1122 07:42:50.536622 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:42:50 crc kubenswrapper[4858]: E1122 07:42:50.537507 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:42:50 crc kubenswrapper[4858]: I1122 07:42:50.795190 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerID="4f2053569e8d4f528ed8d5bac1d5bf3ad51613208839a9e2c576015655b85a63" exitCode=0 Nov 22 07:42:50 crc kubenswrapper[4858]: I1122 07:42:50.795275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerDied","Data":"4f2053569e8d4f528ed8d5bac1d5bf3ad51613208839a9e2c576015655b85a63"} Nov 22 07:42:51 crc kubenswrapper[4858]: I1122 07:42:51.808641 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerID="ae8cac7ee6fb9bf5ba53845a562b7108b049035825a439784784314c390b08ae" exitCode=0 Nov 22 07:42:51 crc kubenswrapper[4858]: I1122 07:42:51.808710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerDied","Data":"ae8cac7ee6fb9bf5ba53845a562b7108b049035825a439784784314c390b08ae"} Nov 22 07:42:51 crc kubenswrapper[4858]: I1122 07:42:51.836972 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:42:51 crc kubenswrapper[4858]: I1122 07:42:51.982539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005488 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcwq\" (UniqueName: \"kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005711 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005942 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.005972 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data\") pod \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\" (UID: \"d0a74856-97e8-4850-8b13-4fc1a6523ae6\") " Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.006180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.007436 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.011222 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.021256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.024078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq" (OuterVolumeSpecName: "kube-api-access-6jcwq") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "kube-api-access-6jcwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.034117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts" (OuterVolumeSpecName: "scripts") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.082386 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.103656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data" (OuterVolumeSpecName: "config-data") pod "d0a74856-97e8-4850-8b13-4fc1a6523ae6" (UID: "d0a74856-97e8-4850-8b13-4fc1a6523ae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108765 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0a74856-97e8-4850-8b13-4fc1a6523ae6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108801 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108813 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108826 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108836 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a74856-97e8-4850-8b13-4fc1a6523ae6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.108846 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcwq\" (UniqueName: \"kubernetes.io/projected/d0a74856-97e8-4850-8b13-4fc1a6523ae6-kube-api-access-6jcwq\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.808811 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 07:42:52 crc kubenswrapper[4858]: E1122 07:42:52.809533 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="ceilometer-notification-agent" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.809550 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="ceilometer-notification-agent" Nov 22 07:42:52 crc kubenswrapper[4858]: E1122 07:42:52.809562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="proxy-httpd" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.809568 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="proxy-httpd" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.809739 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="ceilometer-notification-agent" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.809749 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" containerName="proxy-httpd" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.810348 4858 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.814376 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.814417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.814695 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2q7cc" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.821653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.821731 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.821892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvnq4\" (UniqueName: \"kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.821924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.822488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0a74856-97e8-4850-8b13-4fc1a6523ae6","Type":"ContainerDied","Data":"4cd58ddc497f67537d111c9661db6514b5a697ca48414500df4e753e2ee4a6ce"} Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.822544 4858 scope.go:117] "RemoveContainer" containerID="4f2053569e8d4f528ed8d5bac1d5bf3ad51613208839a9e2c576015655b85a63" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.822749 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.825602 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.848604 4858 scope.go:117] "RemoveContainer" containerID="ae8cac7ee6fb9bf5ba53845a562b7108b049035825a439784784314c390b08ae" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.924118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvnq4\" (UniqueName: \"kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.924181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.924228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.924287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.925468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.950969 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.954802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.957918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvnq4\" (UniqueName: \"kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4\") pod \"openstackclient\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " pod="openstack/openstackclient" Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.980770 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.987275 4858 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:42:52 crc kubenswrapper[4858]: I1122 07:42:52.997030 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.001371 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.004897 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.005264 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.007865 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.028716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nqd\" (UniqueName: \"kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.028808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.028841 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.028979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.029121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.029182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.029279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130489 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nqd\" (UniqueName: \"kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130717 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.130766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.131263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.132175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.136172 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.136504 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.137152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.137286 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.141950 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.165402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nqd\" (UniqueName: \"kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd\") pod \"ceilometer-0\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.346478 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.574766 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a74856-97e8-4850-8b13-4fc1a6523ae6" path="/var/lib/kubelet/pods/d0a74856-97e8-4850-8b13-4fc1a6523ae6/volumes" Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.708198 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:42:53 crc kubenswrapper[4858]: I1122 07:42:53.832471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ca29960-de06-4140-aba1-6f9279722ffe","Type":"ContainerStarted","Data":"87320c60cd7c73ba6ddd621f98de549b24f2d197c8958ffbd31abd59266ffde1"} Nov 22 07:42:54 crc kubenswrapper[4858]: I1122 07:42:54.016380 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:42:54 crc kubenswrapper[4858]: I1122 07:42:54.847170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerStarted","Data":"c404d034966a0a33a3c3aa820e049e8ea53812986c23dbf6e1a26eed83ba23bf"} Nov 22 07:43:02 crc kubenswrapper[4858]: I1122 07:43:02.536166 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:43:02 crc kubenswrapper[4858]: E1122 07:43:02.537822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:43:04 crc kubenswrapper[4858]: I1122 07:43:04.959984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerStarted","Data":"da84b4e609f18222e5c34791887c8082903a80611816a69049c72e5802ccb288"} Nov 22 07:43:05 crc kubenswrapper[4858]: I1122 07:43:05.973992 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9ca29960-de06-4140-aba1-6f9279722ffe","Type":"ContainerStarted","Data":"ebba2b48d81a716f8564ac05f4e67094d58682f34fbb3831a1e66df63c9f2817"} Nov 22 07:43:05 crc kubenswrapper[4858]: I1122 07:43:05.977290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerStarted","Data":"d711e66e9f1dfc6d0d71a8a447a3820a58f34e4d166c7be5ca1024c22f171b16"} Nov 22 07:43:09 crc kubenswrapper[4858]: I1122 07:43:09.014087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerStarted","Data":"4b4f88b5c5dc72f5997aa6bf4d1924e8314f49caa0209ff02d8563fe2b3847ba"} Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.341581 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=6.838943228 podStartE2EDuration="18.341546723s" podCreationTimestamp="2025-11-22 07:42:52 +0000 UTC" firstStartedPulling="2025-11-22 07:42:53.726079241 +0000 UTC m=+1935.567502247" lastFinishedPulling="2025-11-22 07:43:05.228682736 +0000 UTC m=+1947.070105742" observedRunningTime="2025-11-22 07:43:05.998569306 +0000 UTC m=+1947.839992312" watchObservedRunningTime="2025-11-22 07:43:10.341546723 +0000 UTC m=+1952.182969729" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.347132 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.349561 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.354463 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.354877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.355851 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.381262 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.518307 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.518465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.518521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.518819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.519103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.519574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkbt\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.519704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " 
pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.519729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkbt\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.621808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " 
pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.622434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.623583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.630037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.630894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.631520 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.632723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.644114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.655191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkbt\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt\") pod \"swift-proxy-6547bffc85-6ngjc\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:10 crc kubenswrapper[4858]: I1122 07:43:10.673244 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:11 crc kubenswrapper[4858]: I1122 07:43:11.379787 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:43:12 crc kubenswrapper[4858]: I1122 07:43:12.057671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerStarted","Data":"cb8e0b98ed5e42a2ea2c44a584ed1ed9bd02541400dbfafb35c9de9fb15addfb"} Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.071505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerStarted","Data":"eb00d0789abf04eee5762b9ee56aabc63f0f1c94ae705447bb35180a0e8b87ca"} Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.072668 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.073215 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerStarted","Data":"9a6d75795ae8232c4383f35b0f34c6eed669006f1c3713b8cda81bf151289e3c"} Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.074973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerStarted","Data":"ebefdc3e084ed9f2e5333f116e3db3354bbdb4109fb79a5838a328ff08cca8c1"} Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.075184 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.134378 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.458976187 podStartE2EDuration="21.134354099s" podCreationTimestamp="2025-11-22 07:42:52 +0000 UTC" firstStartedPulling="2025-11-22 07:42:54.012048781 +0000 UTC m=+1935.853471797" lastFinishedPulling="2025-11-22 07:43:11.687426703 +0000 UTC m=+1953.528849709" observedRunningTime="2025-11-22 07:43:13.134240975 +0000 UTC m=+1954.975663981" watchObservedRunningTime="2025-11-22 07:43:13.134354099 +0000 UTC m=+1954.975777115" Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.140559 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6547bffc85-6ngjc" podStartSLOduration=3.140537146 podStartE2EDuration="3.140537146s" podCreationTimestamp="2025-11-22 07:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:13.105687371 +0000 UTC m=+1954.947110387" watchObservedRunningTime="2025-11-22 07:43:13.140537146 +0000 UTC m=+1954.981960152" Nov 22 07:43:13 crc kubenswrapper[4858]: I1122 07:43:13.537297 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:43:13 crc kubenswrapper[4858]: E1122 07:43:13.538054 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:43:14 crc kubenswrapper[4858]: I1122 07:43:14.112618 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:20 crc kubenswrapper[4858]: I1122 07:43:20.685670 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:20 crc kubenswrapper[4858]: I1122 07:43:20.687580 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:43:23 crc kubenswrapper[4858]: I1122 07:43:23.352288 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:43:27 crc kubenswrapper[4858]: I1122 07:43:27.536747 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:43:28 crc kubenswrapper[4858]: I1122 07:43:28.251154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0"} Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.026719 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.028151 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-central-agent" containerID="cri-o://da84b4e609f18222e5c34791887c8082903a80611816a69049c72e5802ccb288" gracePeriod=30 Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.029405 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-notification-agent" containerID="cri-o://d711e66e9f1dfc6d0d71a8a447a3820a58f34e4d166c7be5ca1024c22f171b16" gracePeriod=30 Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.029558 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="sg-core" containerID="cri-o://4b4f88b5c5dc72f5997aa6bf4d1924e8314f49caa0209ff02d8563fe2b3847ba" gracePeriod=30 Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.029792 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="proxy-httpd" containerID="cri-o://ebefdc3e084ed9f2e5333f116e3db3354bbdb4109fb79a5838a328ff08cca8c1" gracePeriod=30 Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.276106 4858 generic.go:334] "Generic (PLEG): container finished" podID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerID="4b4f88b5c5dc72f5997aa6bf4d1924e8314f49caa0209ff02d8563fe2b3847ba" exitCode=2 Nov 22 07:43:30 crc kubenswrapper[4858]: I1122 07:43:30.276158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerDied","Data":"4b4f88b5c5dc72f5997aa6bf4d1924e8314f49caa0209ff02d8563fe2b3847ba"} Nov 22 07:43:31 crc kubenswrapper[4858]: I1122 07:43:31.305532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerDied","Data":"ebefdc3e084ed9f2e5333f116e3db3354bbdb4109fb79a5838a328ff08cca8c1"} Nov 22 07:43:31 crc kubenswrapper[4858]: I1122 07:43:31.305474 4858 generic.go:334] "Generic (PLEG): container finished" podID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerID="ebefdc3e084ed9f2e5333f116e3db3354bbdb4109fb79a5838a328ff08cca8c1" exitCode=0 Nov 22 07:43:31 crc kubenswrapper[4858]: I1122 07:43:31.306377 4858 generic.go:334] "Generic (PLEG): container finished" podID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerID="da84b4e609f18222e5c34791887c8082903a80611816a69049c72e5802ccb288" exitCode=0 Nov 22 07:43:31 crc kubenswrapper[4858]: I1122 07:43:31.306444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerDied","Data":"da84b4e609f18222e5c34791887c8082903a80611816a69049c72e5802ccb288"} Nov 22 07:43:32 crc kubenswrapper[4858]: I1122 07:43:32.318877 4858 generic.go:334] "Generic (PLEG): container finished" podID="d41812ee-66ac-438e-82b5-cb404aa95294" containerID="f20acacb794a33f3c4580766d27a38e6353236383e5589415e8e4d4c9d95c565" exitCode=0 Nov 22 07:43:32 crc kubenswrapper[4858]: I1122 07:43:32.318966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rkx92" event={"ID":"d41812ee-66ac-438e-82b5-cb404aa95294","Type":"ContainerDied","Data":"f20acacb794a33f3c4580766d27a38e6353236383e5589415e8e4d4c9d95c565"} Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.804753 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rkx92" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948002 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs\") pod \"d41812ee-66ac-438e-82b5-cb404aa95294\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948078 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts\") pod \"d41812ee-66ac-438e-82b5-cb404aa95294\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data\") pod \"d41812ee-66ac-438e-82b5-cb404aa95294\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle\") pod \"d41812ee-66ac-438e-82b5-cb404aa95294\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948264 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4w8l\" (UniqueName: \"kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l\") pod \"d41812ee-66ac-438e-82b5-cb404aa95294\" (UID: \"d41812ee-66ac-438e-82b5-cb404aa95294\") " Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.948635 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs" (OuterVolumeSpecName: "logs") pod "d41812ee-66ac-438e-82b5-cb404aa95294" (UID: "d41812ee-66ac-438e-82b5-cb404aa95294"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.949159 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41812ee-66ac-438e-82b5-cb404aa95294-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.959838 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts" (OuterVolumeSpecName: "scripts") pod "d41812ee-66ac-438e-82b5-cb404aa95294" (UID: "d41812ee-66ac-438e-82b5-cb404aa95294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.960443 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l" (OuterVolumeSpecName: "kube-api-access-v4w8l") pod "d41812ee-66ac-438e-82b5-cb404aa95294" (UID: "d41812ee-66ac-438e-82b5-cb404aa95294"). InnerVolumeSpecName "kube-api-access-v4w8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.983026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d41812ee-66ac-438e-82b5-cb404aa95294" (UID: "d41812ee-66ac-438e-82b5-cb404aa95294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:33 crc kubenswrapper[4858]: I1122 07:43:33.983898 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data" (OuterVolumeSpecName: "config-data") pod "d41812ee-66ac-438e-82b5-cb404aa95294" (UID: "d41812ee-66ac-438e-82b5-cb404aa95294"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.050440 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.050479 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.050493 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41812ee-66ac-438e-82b5-cb404aa95294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.050506 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4w8l\" (UniqueName: \"kubernetes.io/projected/d41812ee-66ac-438e-82b5-cb404aa95294-kube-api-access-v4w8l\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.344696 4858 generic.go:334] "Generic (PLEG): container finished" podID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerID="d711e66e9f1dfc6d0d71a8a447a3820a58f34e4d166c7be5ca1024c22f171b16" exitCode=0 Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.344789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerDied","Data":"d711e66e9f1dfc6d0d71a8a447a3820a58f34e4d166c7be5ca1024c22f171b16"} Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.349828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rkx92" event={"ID":"d41812ee-66ac-438e-82b5-cb404aa95294","Type":"ContainerDied","Data":"c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287"} Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.349878 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d7ddc53b844ccdd079ecadeb7db4530eb9d19c7915ecefe1f3f45f3ed9e287" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.349938 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rkx92" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.520965 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:43:34 crc kubenswrapper[4858]: E1122 07:43:34.522192 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" containerName="placement-db-sync" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.522216 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" containerName="placement-db-sync" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.522489 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" containerName="placement-db-sync" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.532008 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.540602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2rw2b" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.540888 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.541069 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.540604 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.548838 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.554834 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.666880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.666953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.667192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.667293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbzc\" (UniqueName: \"kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.667483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.668084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.668188 
4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.746695 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbzc\" (UniqueName: \"kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.770980 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.771617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 
07:43:34.777914 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.781594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.785397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.791830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.792707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.802094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbzc\" (UniqueName: \"kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc\") pod \"placement-f787bd646-rhtm4\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.878607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.878745 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nqd\" (UniqueName: \"kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.878863 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.878973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd\") pod 
\"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.879000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.879050 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.879106 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts\") pod \"40642fc4-b20e-4668-b90e-2a878617bd0d\" (UID: \"40642fc4-b20e-4668-b90e-2a878617bd0d\") " Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.883497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.884199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.893125 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.903702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts" (OuterVolumeSpecName: "scripts") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.926417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd" (OuterVolumeSpecName: "kube-api-access-m5nqd") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "kube-api-access-m5nqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.984640 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nqd\" (UniqueName: \"kubernetes.io/projected/40642fc4-b20e-4668-b90e-2a878617bd0d-kube-api-access-m5nqd\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.984696 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.984711 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40642fc4-b20e-4668-b90e-2a878617bd0d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:34 crc kubenswrapper[4858]: I1122 07:43:34.984723 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.019510 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.135588 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.168854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.173437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data" (OuterVolumeSpecName: "config-data") pod "40642fc4-b20e-4668-b90e-2a878617bd0d" (UID: "40642fc4-b20e-4668-b90e-2a878617bd0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.239970 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.240021 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40642fc4-b20e-4668-b90e-2a878617bd0d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.370719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40642fc4-b20e-4668-b90e-2a878617bd0d","Type":"ContainerDied","Data":"c404d034966a0a33a3c3aa820e049e8ea53812986c23dbf6e1a26eed83ba23bf"} Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.370791 4858 scope.go:117] "RemoveContainer" containerID="ebefdc3e084ed9f2e5333f116e3db3354bbdb4109fb79a5838a328ff08cca8c1" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.370851 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.404890 4858 scope.go:117] "RemoveContainer" containerID="4b4f88b5c5dc72f5997aa6bf4d1924e8314f49caa0209ff02d8563fe2b3847ba" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.434247 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.459069 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.468176 4858 scope.go:117] "RemoveContainer" containerID="d711e66e9f1dfc6d0d71a8a447a3820a58f34e4d166c7be5ca1024c22f171b16" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.473706 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:35 crc kubenswrapper[4858]: E1122 07:43:35.474366 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="sg-core" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474393 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="sg-core" Nov 22 07:43:35 crc kubenswrapper[4858]: E1122 07:43:35.474423 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="proxy-httpd" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474433 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="proxy-httpd" Nov 22 07:43:35 crc kubenswrapper[4858]: E1122 07:43:35.474464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-central-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474473 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-central-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: E1122 07:43:35.474486 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-notification-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474493 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-notification-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474836 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="proxy-httpd" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474856 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-notification-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474869 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="ceilometer-central-agent" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.474881 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" containerName="sg-core" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.477022 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.488178 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.488451 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.488546 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.549036 4858 scope.go:117] "RemoveContainer" containerID="da84b4e609f18222e5c34791887c8082903a80611816a69049c72e5802ccb288" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmvq\" (UniqueName: \"kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564309 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564590 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.564702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.600697 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40642fc4-b20e-4668-b90e-2a878617bd0d" path="/var/lib/kubelet/pods/40642fc4-b20e-4668-b90e-2a878617bd0d/volumes" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.637966 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:43:35 crc kubenswrapper[4858]: W1122 07:43:35.641202 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d4e5cb5_ebc0_4cec_a53e_452efc26731b.slice/crio-82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0 WatchSource:0}: Error finding container 82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0: Status 404 returned error can't find the container with id 82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0 Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666555 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmvq\" (UniqueName: \"kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc 
kubenswrapper[4858]: I1122 07:43:35.666699 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.666728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.668634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.668670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.675244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.675305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.677359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.679212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.695218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmvq\" (UniqueName: \"kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq\") pod \"ceilometer-0\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " pod="openstack/ceilometer-0" Nov 22 07:43:35 crc kubenswrapper[4858]: I1122 07:43:35.826683 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:36 crc kubenswrapper[4858]: I1122 07:43:36.404503 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:36 crc kubenswrapper[4858]: I1122 07:43:36.416835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerStarted","Data":"df637c4bab3b1c089c9ad8726c02b0cd45f173fc27bc1d9048018902900124ab"} Nov 22 07:43:36 crc kubenswrapper[4858]: I1122 07:43:36.417343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerStarted","Data":"82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0"} Nov 22 07:43:36 crc kubenswrapper[4858]: W1122 07:43:36.418622 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51625658_4507_4c2e_9a45_26ff0718bd44.slice/crio-f9ee36a28320441ec72a184cf1a75ce752839462b2a4087caabb0a8e4cbb0828 WatchSource:0}: Error finding container f9ee36a28320441ec72a184cf1a75ce752839462b2a4087caabb0a8e4cbb0828: Status 404 returned error can't find the container with id f9ee36a28320441ec72a184cf1a75ce752839462b2a4087caabb0a8e4cbb0828 Nov 22 07:43:37 crc kubenswrapper[4858]: I1122 07:43:37.428085 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerStarted","Data":"f9ee36a28320441ec72a184cf1a75ce752839462b2a4087caabb0a8e4cbb0828"} Nov 22 07:43:37 crc kubenswrapper[4858]: I1122 07:43:37.430569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerStarted","Data":"e3acbe684a3b1cf56d9ce339047e865b4bf5f7e2b06b06679ba47e5ef77b37e7"} Nov 22 07:43:37 crc kubenswrapper[4858]: I1122 07:43:37.430720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:37 crc kubenswrapper[4858]: I1122 07:43:37.461030 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f787bd646-rhtm4" podStartSLOduration=3.461005105 podStartE2EDuration="3.461005105s" podCreationTimestamp="2025-11-22 07:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:37.458607328 +0000 UTC m=+1979.300030354" watchObservedRunningTime="2025-11-22 07:43:37.461005105 +0000 UTC m=+1979.302428111" Nov 22 07:43:38 crc kubenswrapper[4858]: I1122 07:43:38.486188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerStarted","Data":"35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918"} Nov 22 07:43:38 crc kubenswrapper[4858]: I1122 07:43:38.486577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:43:39 crc kubenswrapper[4858]: I1122 07:43:39.504204 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerStarted","Data":"ddde1c52caa82e56c239626a2d2ca389f6803b1b018fce673d7703cca8d7efa4"} Nov 22 07:43:41 crc kubenswrapper[4858]: I1122 07:43:41.532126 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerStarted","Data":"45c8fb1d759982854bb7fe975a6194a27cbad96b147c98915ee92c44bd9d577a"} Nov 22 07:43:43 crc kubenswrapper[4858]: I1122 07:43:43.554988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerStarted","Data":"f3617a054edc86d77357d86bf813f2c9836bcad34a77c4c41ef2401d3ea3f0d9"} Nov 22 07:43:43 crc kubenswrapper[4858]: I1122 07:43:43.556520 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:43:43 crc kubenswrapper[4858]: I1122 07:43:43.584487 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.459041293 podStartE2EDuration="8.584465293s" podCreationTimestamp="2025-11-22 07:43:35 +0000 UTC" firstStartedPulling="2025-11-22 07:43:36.423334788 +0000 UTC m=+1978.264757794" lastFinishedPulling="2025-11-22 07:43:42.548758798 +0000 UTC m=+1984.390181794" observedRunningTime="2025-11-22 07:43:43.580448894 +0000 UTC m=+1985.421871920" watchObservedRunningTime="2025-11-22 07:43:43.584465293 +0000 UTC m=+1985.425888299" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.405728 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ct822"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.407977 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.444550 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ct822"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.507916 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-567gq"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.510806 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.511774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.511915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22s4\" (UniqueName: \"kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.603135 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-567gq"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.614173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22s4\" (UniqueName: \"kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.614392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbzx\" (UniqueName: \"kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.614477 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.614525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.615866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.680027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22s4\" (UniqueName: \"kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4\") pod \"nova-api-db-create-ct822\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.719412 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4vbzx\" (UniqueName: \"kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.719507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.720361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.746070 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.761029 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xgbvx"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.762567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.794519 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d4cc-account-create-4qfzg"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.795873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.797753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbzx\" (UniqueName: \"kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx\") pod \"nova-cell0-db-create-567gq\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.801814 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.833095 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.835432 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xgbvx"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.926119 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d4cc-account-create-4qfzg"] Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.953536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzgt9\" (UniqueName: \"kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.953726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.953818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znx8g\" (UniqueName: \"kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:46 crc kubenswrapper[4858]: I1122 07:43:46.959752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.033438 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4e0e-account-create-nqlr6"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.034991 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.040750 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.052712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e0e-account-create-nqlr6"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.062950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzgt9\" (UniqueName: \"kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.063092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.064155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.064270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znx8g\" (UniqueName: \"kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.065302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts\") pod \"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.065496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.066570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.066771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrkg2\" (UniqueName: \"kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2\") pod 
\"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.097157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzgt9\" (UniqueName: \"kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9\") pod \"nova-cell1-db-create-xgbvx\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.102939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znx8g\" (UniqueName: \"kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g\") pod \"nova-api-d4cc-account-create-4qfzg\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.159892 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1968-account-create-h6ffw"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.161807 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.166009 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.168260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts\") pod \"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.168473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzbz\" (UniqueName: \"kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.168621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.168745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrkg2\" (UniqueName: \"kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2\") pod \"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.170340 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts\") pod \"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " 
pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.177468 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1968-account-create-h6ffw"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.200104 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrkg2\" (UniqueName: \"kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2\") pod \"nova-cell0-4e0e-account-create-nqlr6\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.271015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzbz\" (UniqueName: \"kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.271364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.275798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.289461 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.294802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzbz\" (UniqueName: \"kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz\") pod \"nova-cell1-1968-account-create-h6ffw\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.307158 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.358470 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.399677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-567gq"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.512452 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.566664 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ct822"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.627900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-567gq" event={"ID":"f11e22df-3691-484e-a21d-906038a0eea8","Type":"ContainerStarted","Data":"333bdbfd8ccce27f0c685bf102b35bf7693f9853743fcce4c96ebce0ecb26953"} Nov 22 07:43:47 crc kubenswrapper[4858]: W1122 07:43:47.638247 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e1b01b_da13_4121_8257_60e0fbbca27c.slice/crio-4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8 WatchSource:0}: Error finding container 4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8: Status 404 returned error can't find the container with id 4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8 Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.932347 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xgbvx"] Nov 22 07:43:47 crc kubenswrapper[4858]: I1122 07:43:47.973956 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d4cc-account-create-4qfzg"] Nov 22 07:43:47 crc kubenswrapper[4858]: W1122 07:43:47.974705 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632b42f8_37dd_4569_87f0_a7a6f9a802f0.slice/crio-9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528 WatchSource:0}: Error finding container 9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528: Status 404 returned error can't find the container with id 9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528 Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.127921 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e0e-account-create-nqlr6"] Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.248260 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1968-account-create-h6ffw"] Nov 22 07:43:48 crc kubenswrapper[4858]: W1122 07:43:48.267973 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod890e1296_50c3_4f46_8359_08d3210fb46d.slice/crio-50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4 WatchSource:0}: Error finding container 50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4: Status 404 returned error can't find the container with id 50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4 Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.641734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ct822" event={"ID":"b5e1b01b-da13-4121-8257-60e0fbbca27c","Type":"ContainerStarted","Data":"f6520daed76b5e870bfc8aa2ee1122860ae7b6539407e7359bd9ae7e3a45b1f7"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.642240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ct822" event={"ID":"b5e1b01b-da13-4121-8257-60e0fbbca27c","Type":"ContainerStarted","Data":"4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.647274 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgbvx" event={"ID":"557eea09-096b-40be-8182-638ffcaa230e","Type":"ContainerStarted","Data":"7353a4588a60ee3d4c43c007a2286febfa005d0de82d84253ef99191853f4d20"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.647905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgbvx" event={"ID":"557eea09-096b-40be-8182-638ffcaa230e","Type":"ContainerStarted","Data":"38764773919e78e71fb8c80dce75757aac011ec1c67a654574b59db018b64020"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.653492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1968-account-create-h6ffw" event={"ID":"890e1296-50c3-4f46-8359-08d3210fb46d","Type":"ContainerStarted","Data":"50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.657354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4cc-account-create-4qfzg" event={"ID":"632b42f8-37dd-4569-87f0-a7a6f9a802f0","Type":"ContainerStarted","Data":"1dad77e690ec2f712aff447348a272321482ef5f3173abeb7fe25907d4dc4a72"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.657405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4cc-account-create-4qfzg" event={"ID":"632b42f8-37dd-4569-87f0-a7a6f9a802f0","Type":"ContainerStarted","Data":"9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.663559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" event={"ID":"8558e5a0-abd5-4634-82b1-dfd995b12ace","Type":"ContainerStarted","Data":"111dcca46bd3fcaad0968661bca007da7afb43901445452bb8c21debc1e1efb9"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.663609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" event={"ID":"8558e5a0-abd5-4634-82b1-dfd995b12ace","Type":"ContainerStarted","Data":"801b138c1ebac52b472756ccf919b189515bd0b7f56ae13f12abe907ab3247fd"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.668689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-567gq" event={"ID":"f11e22df-3691-484e-a21d-906038a0eea8","Type":"ContainerStarted","Data":"ed31a8de8ebda973678facde6b66275df75c33a364706e494d6a7d07aab991ea"} Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.670566 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-ct822" podStartSLOduration=2.67047181 podStartE2EDuration="2.67047181s" podCreationTimestamp="2025-11-22 07:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:48.666754132 +0000 UTC m=+1990.508177148" watchObservedRunningTime="2025-11-22 07:43:48.67047181 +0000 UTC m=+1990.511894816" Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.702798 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xgbvx" podStartSLOduration=2.7027316040000002 podStartE2EDuration="2.702731604s" podCreationTimestamp="2025-11-22 07:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:48.696957009 +0000 UTC m=+1990.538380015" 
watchObservedRunningTime="2025-11-22 07:43:48.702731604 +0000 UTC m=+1990.544154640" Nov 22 07:43:48 crc kubenswrapper[4858]: I1122 07:43:48.721896 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-567gq" podStartSLOduration=2.721870557 podStartE2EDuration="2.721870557s" podCreationTimestamp="2025-11-22 07:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:48.719673997 +0000 UTC m=+1990.561097003" watchObservedRunningTime="2025-11-22 07:43:48.721870557 +0000 UTC m=+1990.563293563" Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.682404 4858 generic.go:334] "Generic (PLEG): container finished" podID="557eea09-096b-40be-8182-638ffcaa230e" containerID="7353a4588a60ee3d4c43c007a2286febfa005d0de82d84253ef99191853f4d20" exitCode=0 Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.682485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgbvx" event={"ID":"557eea09-096b-40be-8182-638ffcaa230e","Type":"ContainerDied","Data":"7353a4588a60ee3d4c43c007a2286febfa005d0de82d84253ef99191853f4d20"} Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.684363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1968-account-create-h6ffw" event={"ID":"890e1296-50c3-4f46-8359-08d3210fb46d","Type":"ContainerStarted","Data":"3f5ad003ed82a4b8e9cedea83c84f2a30c9a4de0fec0a69fc9fdc9a61424e182"} Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.726049 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-1968-account-create-h6ffw" podStartSLOduration=2.7260275099999998 podStartE2EDuration="2.72602751s" podCreationTimestamp="2025-11-22 07:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:49.720131281 +0000 UTC m=+1991.561554287" watchObservedRunningTime="2025-11-22 07:43:49.72602751 +0000 UTC m=+1991.567450516" Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.749279 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d4cc-account-create-4qfzg" podStartSLOduration=3.749254744 podStartE2EDuration="3.749254744s" podCreationTimestamp="2025-11-22 07:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:49.739786901 +0000 UTC m=+1991.581209907" watchObservedRunningTime="2025-11-22 07:43:49.749254744 +0000 UTC m=+1991.590677750" Nov 22 07:43:49 crc kubenswrapper[4858]: I1122 07:43:49.775689 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" podStartSLOduration=2.775659311 podStartE2EDuration="2.775659311s" podCreationTimestamp="2025-11-22 07:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:49.761950151 +0000 UTC m=+1991.603373177" watchObservedRunningTime="2025-11-22 07:43:49.775659311 +0000 UTC m=+1991.617082317" Nov 22 07:43:50 crc kubenswrapper[4858]: I1122 07:43:50.701179 4858 generic.go:334] "Generic (PLEG): container finished" podID="890e1296-50c3-4f46-8359-08d3210fb46d" containerID="3f5ad003ed82a4b8e9cedea83c84f2a30c9a4de0fec0a69fc9fdc9a61424e182" exitCode=0 Nov 22 07:43:50 crc 
kubenswrapper[4858]: I1122 07:43:50.701433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1968-account-create-h6ffw" event={"ID":"890e1296-50c3-4f46-8359-08d3210fb46d","Type":"ContainerDied","Data":"3f5ad003ed82a4b8e9cedea83c84f2a30c9a4de0fec0a69fc9fdc9a61424e182"} Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.117097 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.181170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts\") pod \"557eea09-096b-40be-8182-638ffcaa230e\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.181411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzgt9\" (UniqueName: \"kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9\") pod \"557eea09-096b-40be-8182-638ffcaa230e\" (UID: \"557eea09-096b-40be-8182-638ffcaa230e\") " Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.182170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "557eea09-096b-40be-8182-638ffcaa230e" (UID: "557eea09-096b-40be-8182-638ffcaa230e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.182668 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557eea09-096b-40be-8182-638ffcaa230e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.202060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9" (OuterVolumeSpecName: "kube-api-access-bzgt9") pod "557eea09-096b-40be-8182-638ffcaa230e" (UID: "557eea09-096b-40be-8182-638ffcaa230e"). InnerVolumeSpecName "kube-api-access-bzgt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.284993 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzgt9\" (UniqueName: \"kubernetes.io/projected/557eea09-096b-40be-8182-638ffcaa230e-kube-api-access-bzgt9\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.717897 4858 generic.go:334] "Generic (PLEG): container finished" podID="f11e22df-3691-484e-a21d-906038a0eea8" containerID="ed31a8de8ebda973678facde6b66275df75c33a364706e494d6a7d07aab991ea" exitCode=0 Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.717984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-567gq" event={"ID":"f11e22df-3691-484e-a21d-906038a0eea8","Type":"ContainerDied","Data":"ed31a8de8ebda973678facde6b66275df75c33a364706e494d6a7d07aab991ea"} Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.722152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xgbvx" event={"ID":"557eea09-096b-40be-8182-638ffcaa230e","Type":"ContainerDied","Data":"38764773919e78e71fb8c80dce75757aac011ec1c67a654574b59db018b64020"} Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.722245 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38764773919e78e71fb8c80dce75757aac011ec1c67a654574b59db018b64020" Nov 22 07:43:51 crc kubenswrapper[4858]: I1122 07:43:51.722188 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xgbvx" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.124574 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.205062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rzbz\" (UniqueName: \"kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz\") pod \"890e1296-50c3-4f46-8359-08d3210fb46d\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.205396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts\") pod \"890e1296-50c3-4f46-8359-08d3210fb46d\" (UID: \"890e1296-50c3-4f46-8359-08d3210fb46d\") " Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.206549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "890e1296-50c3-4f46-8359-08d3210fb46d" (UID: "890e1296-50c3-4f46-8359-08d3210fb46d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.216208 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz" (OuterVolumeSpecName: "kube-api-access-5rzbz") pod "890e1296-50c3-4f46-8359-08d3210fb46d" (UID: "890e1296-50c3-4f46-8359-08d3210fb46d"). InnerVolumeSpecName "kube-api-access-5rzbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.309104 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/890e1296-50c3-4f46-8359-08d3210fb46d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.309699 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rzbz\" (UniqueName: \"kubernetes.io/projected/890e1296-50c3-4f46-8359-08d3210fb46d-kube-api-access-5rzbz\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.742160 4858 generic.go:334] "Generic (PLEG): container finished" podID="632b42f8-37dd-4569-87f0-a7a6f9a802f0" containerID="1dad77e690ec2f712aff447348a272321482ef5f3173abeb7fe25907d4dc4a72" exitCode=0 Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.742278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4cc-account-create-4qfzg" event={"ID":"632b42f8-37dd-4569-87f0-a7a6f9a802f0","Type":"ContainerDied","Data":"1dad77e690ec2f712aff447348a272321482ef5f3173abeb7fe25907d4dc4a72"} Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.745631 4858 generic.go:334] "Generic (PLEG): container finished" podID="8558e5a0-abd5-4634-82b1-dfd995b12ace" containerID="111dcca46bd3fcaad0968661bca007da7afb43901445452bb8c21debc1e1efb9" exitCode=0 Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.745769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" event={"ID":"8558e5a0-abd5-4634-82b1-dfd995b12ace","Type":"ContainerDied","Data":"111dcca46bd3fcaad0968661bca007da7afb43901445452bb8c21debc1e1efb9"} Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.751290 4858 generic.go:334] "Generic (PLEG): container finished" podID="b5e1b01b-da13-4121-8257-60e0fbbca27c" containerID="f6520daed76b5e870bfc8aa2ee1122860ae7b6539407e7359bd9ae7e3a45b1f7" exitCode=0 Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.751815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ct822" event={"ID":"b5e1b01b-da13-4121-8257-60e0fbbca27c","Type":"ContainerDied","Data":"f6520daed76b5e870bfc8aa2ee1122860ae7b6539407e7359bd9ae7e3a45b1f7"} Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.754909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1968-account-create-h6ffw" event={"ID":"890e1296-50c3-4f46-8359-08d3210fb46d","Type":"ContainerDied","Data":"50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4"} Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.754947 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f9185dbb0662c1e58b37eeb258dc3255d4b633bb6f5efb5f8c9b2a238f22e4" Nov 22 07:43:52 crc kubenswrapper[4858]: I1122 07:43:52.755019 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1968-account-create-h6ffw" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.126589 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.235649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbzx\" (UniqueName: \"kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx\") pod \"f11e22df-3691-484e-a21d-906038a0eea8\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.235913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts\") pod \"f11e22df-3691-484e-a21d-906038a0eea8\" (UID: \"f11e22df-3691-484e-a21d-906038a0eea8\") " Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.236722 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f11e22df-3691-484e-a21d-906038a0eea8" (UID: "f11e22df-3691-484e-a21d-906038a0eea8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.241592 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx" (OuterVolumeSpecName: "kube-api-access-4vbzx") pod "f11e22df-3691-484e-a21d-906038a0eea8" (UID: "f11e22df-3691-484e-a21d-906038a0eea8"). InnerVolumeSpecName "kube-api-access-4vbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.339335 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbzx\" (UniqueName: \"kubernetes.io/projected/f11e22df-3691-484e-a21d-906038a0eea8-kube-api-access-4vbzx\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.339394 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f11e22df-3691-484e-a21d-906038a0eea8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.769496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-567gq" event={"ID":"f11e22df-3691-484e-a21d-906038a0eea8","Type":"ContainerDied","Data":"333bdbfd8ccce27f0c685bf102b35bf7693f9853743fcce4c96ebce0ecb26953"} Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.770041 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333bdbfd8ccce27f0c685bf102b35bf7693f9853743fcce4c96ebce0ecb26953" Nov 22 07:43:53 crc kubenswrapper[4858]: I1122 07:43:53.769753 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-567gq" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.305473 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.323840 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.341505 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c22s4\" (UniqueName: \"kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4\") pod \"b5e1b01b-da13-4121-8257-60e0fbbca27c\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znx8g\" (UniqueName: \"kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g\") pod \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361651 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts\") pod \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\" (UID: \"632b42f8-37dd-4569-87f0-a7a6f9a802f0\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts\") pod \"b5e1b01b-da13-4121-8257-60e0fbbca27c\" (UID: \"b5e1b01b-da13-4121-8257-60e0fbbca27c\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361757 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrkg2\" (UniqueName: \"kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2\") pod \"8558e5a0-abd5-4634-82b1-dfd995b12ace\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.361786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts\") pod \"8558e5a0-abd5-4634-82b1-dfd995b12ace\" (UID: \"8558e5a0-abd5-4634-82b1-dfd995b12ace\") " Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.363215 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "632b42f8-37dd-4569-87f0-a7a6f9a802f0" (UID: "632b42f8-37dd-4569-87f0-a7a6f9a802f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.366945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8558e5a0-abd5-4634-82b1-dfd995b12ace" (UID: "8558e5a0-abd5-4634-82b1-dfd995b12ace"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.367114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5e1b01b-da13-4121-8257-60e0fbbca27c" (UID: "b5e1b01b-da13-4121-8257-60e0fbbca27c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.369358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g" (OuterVolumeSpecName: "kube-api-access-znx8g") pod "632b42f8-37dd-4569-87f0-a7a6f9a802f0" (UID: "632b42f8-37dd-4569-87f0-a7a6f9a802f0"). InnerVolumeSpecName "kube-api-access-znx8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.369468 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2" (OuterVolumeSpecName: "kube-api-access-mrkg2") pod "8558e5a0-abd5-4634-82b1-dfd995b12ace" (UID: "8558e5a0-abd5-4634-82b1-dfd995b12ace"). InnerVolumeSpecName "kube-api-access-mrkg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.386922 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4" (OuterVolumeSpecName: "kube-api-access-c22s4") pod "b5e1b01b-da13-4121-8257-60e0fbbca27c" (UID: "b5e1b01b-da13-4121-8257-60e0fbbca27c"). InnerVolumeSpecName "kube-api-access-c22s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463643 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrkg2\" (UniqueName: \"kubernetes.io/projected/8558e5a0-abd5-4634-82b1-dfd995b12ace-kube-api-access-mrkg2\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463722 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8558e5a0-abd5-4634-82b1-dfd995b12ace-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463732 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c22s4\" (UniqueName: \"kubernetes.io/projected/b5e1b01b-da13-4121-8257-60e0fbbca27c-kube-api-access-c22s4\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463745 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znx8g\" (UniqueName: \"kubernetes.io/projected/632b42f8-37dd-4569-87f0-a7a6f9a802f0-kube-api-access-znx8g\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463754 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/632b42f8-37dd-4569-87f0-a7a6f9a802f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.463764 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5e1b01b-da13-4121-8257-60e0fbbca27c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.790135 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ct822" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.790174 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ct822" event={"ID":"b5e1b01b-da13-4121-8257-60e0fbbca27c","Type":"ContainerDied","Data":"4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8"} Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.790253 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4367ffcc6a783418eb981cff0514eec2b384020d4716cd5a3003adc5ba2f48a8" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.794016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4cc-account-create-4qfzg" event={"ID":"632b42f8-37dd-4569-87f0-a7a6f9a802f0","Type":"ContainerDied","Data":"9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528"} Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.794217 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9030b41b77096faf1e692b5a70b350b530b2679a49acb45908e20293c9fd8528" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.794082 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4cc-account-create-4qfzg" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.797756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" event={"ID":"8558e5a0-abd5-4634-82b1-dfd995b12ace","Type":"ContainerDied","Data":"801b138c1ebac52b472756ccf919b189515bd0b7f56ae13f12abe907ab3247fd"} Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.797825 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801b138c1ebac52b472756ccf919b189515bd0b7f56ae13f12abe907ab3247fd" Nov 22 07:43:54 crc kubenswrapper[4858]: I1122 07:43:54.797900 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e0e-account-create-nqlr6" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.351374 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hrts7"] Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352676 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8558e5a0-abd5-4634-82b1-dfd995b12ace" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352727 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8558e5a0-abd5-4634-82b1-dfd995b12ace" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11e22df-3691-484e-a21d-906038a0eea8" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11e22df-3691-484e-a21d-906038a0eea8" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352764 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632b42f8-37dd-4569-87f0-a7a6f9a802f0" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352771 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="632b42f8-37dd-4569-87f0-a7a6f9a802f0" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352800 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e1b01b-da13-4121-8257-60e0fbbca27c" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352808 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e1b01b-da13-4121-8257-60e0fbbca27c" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352816 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557eea09-096b-40be-8182-638ffcaa230e" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352823 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="557eea09-096b-40be-8182-638ffcaa230e" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: E1122 07:43:57.352845 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890e1296-50c3-4f46-8359-08d3210fb46d" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.352852 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="890e1296-50c3-4f46-8359-08d3210fb46d" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353057 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8558e5a0-abd5-4634-82b1-dfd995b12ace" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353071 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="557eea09-096b-40be-8182-638ffcaa230e" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353087 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11e22df-3691-484e-a21d-906038a0eea8" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353097 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="632b42f8-37dd-4569-87f0-a7a6f9a802f0" 
containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353108 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="890e1296-50c3-4f46-8359-08d3210fb46d" containerName="mariadb-account-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.353120 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e1b01b-da13-4121-8257-60e0fbbca27c" containerName="mariadb-database-create" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.354023 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.357072 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.357365 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fdmdn" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.358593 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.375436 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hrts7"] Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.423143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.423504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658vw\" (UniqueName: \"kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.423689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.424039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.525606 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.525705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.525741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658vw\" (UniqueName: \"kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.525783 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.533357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.533718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.535658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.551162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658vw\" (UniqueName: \"kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw\") pod \"nova-cell0-conductor-db-sync-hrts7\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:57 crc kubenswrapper[4858]: I1122 07:43:57.692244 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:43:58 crc kubenswrapper[4858]: I1122 07:43:58.206959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hrts7"] Nov 22 07:43:58 crc kubenswrapper[4858]: I1122 07:43:58.853735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hrts7" event={"ID":"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08","Type":"ContainerStarted","Data":"0e6166f99e07936e5c0ed0616cf026cf3c8f5c380756d5a7c93a4cc0744170d3"} Nov 22 07:44:04 crc kubenswrapper[4858]: I1122 07:44:04.920256 4858 generic.go:334] "Generic (PLEG): container finished" podID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" containerID="52223809a6d6bfb7225e42121de5c27970a68606da724fbdc5f05682783c72f0" exitCode=0 Nov 22 07:44:04 crc kubenswrapper[4858]: I1122 07:44:04.920837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmpsm" event={"ID":"854da42b-c1a7-4390-91cf-2fa7fa3e8eab","Type":"ContainerDied","Data":"52223809a6d6bfb7225e42121de5c27970a68606da724fbdc5f05682783c72f0"} Nov 22 07:44:05 crc kubenswrapper[4858]: I1122 07:44:05.834102 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.062611 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.063372 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-central-agent" containerID="cri-o://35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918" gracePeriod=30 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.063464 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="sg-core" containerID="cri-o://45c8fb1d759982854bb7fe975a6194a27cbad96b147c98915ee92c44bd9d577a" gracePeriod=30 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.063530 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-notification-agent" containerID="cri-o://ddde1c52caa82e56c239626a2d2ca389f6803b1b018fce673d7703cca8d7efa4" gracePeriod=30 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.063548 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="proxy-httpd" containerID="cri-o://f3617a054edc86d77357d86bf813f2c9836bcad34a77c4c41ef2401d3ea3f0d9" gracePeriod=30 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.615502 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.772351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhwsg\" (UniqueName: \"kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg\") pod \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.772606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data\") pod \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.772762 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle\") pod \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\" (UID: \"854da42b-c1a7-4390-91cf-2fa7fa3e8eab\") " Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.780690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "854da42b-c1a7-4390-91cf-2fa7fa3e8eab" (UID: "854da42b-c1a7-4390-91cf-2fa7fa3e8eab"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.780901 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg" (OuterVolumeSpecName: "kube-api-access-vhwsg") pod "854da42b-c1a7-4390-91cf-2fa7fa3e8eab" (UID: "854da42b-c1a7-4390-91cf-2fa7fa3e8eab"). InnerVolumeSpecName "kube-api-access-vhwsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.814729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "854da42b-c1a7-4390-91cf-2fa7fa3e8eab" (UID: "854da42b-c1a7-4390-91cf-2fa7fa3e8eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.876832 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhwsg\" (UniqueName: \"kubernetes.io/projected/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-kube-api-access-vhwsg\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.876874 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.876887 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854da42b-c1a7-4390-91cf-2fa7fa3e8eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.970270 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dmpsm" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.970260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dmpsm" event={"ID":"854da42b-c1a7-4390-91cf-2fa7fa3e8eab","Type":"ContainerDied","Data":"c4f4bcd9c12b95cad57ec8980f43b2386b6a282ffd95f74b02a56e0761a6ed99"} Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.970344 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f4bcd9c12b95cad57ec8980f43b2386b6a282ffd95f74b02a56e0761a6ed99" Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.973953 4858 generic.go:334] "Generic (PLEG): container finished" podID="51625658-4507-4c2e-9a45-26ff0718bd44" containerID="f3617a054edc86d77357d86bf813f2c9836bcad34a77c4c41ef2401d3ea3f0d9" exitCode=0 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.974002 4858 generic.go:334] "Generic (PLEG): container finished" podID="51625658-4507-4c2e-9a45-26ff0718bd44" containerID="45c8fb1d759982854bb7fe975a6194a27cbad96b147c98915ee92c44bd9d577a" exitCode=2 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.974013 4858 generic.go:334] "Generic (PLEG): container finished" podID="51625658-4507-4c2e-9a45-26ff0718bd44" containerID="35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918" exitCode=0 Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.974024 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerDied","Data":"f3617a054edc86d77357d86bf813f2c9836bcad34a77c4c41ef2401d3ea3f0d9"} Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.974105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerDied","Data":"45c8fb1d759982854bb7fe975a6194a27cbad96b147c98915ee92c44bd9d577a"} Nov 22 07:44:08 crc kubenswrapper[4858]: I1122 07:44:08.974122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerDied","Data":"35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918"} Nov 22 07:44:09 crc kubenswrapper[4858]: E1122 07:44:09.055117 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51625658_4507_4c2e_9a45_26ff0718bd44.slice/crio-conmon-35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod854da42b_c1a7_4390_91cf_2fa7fa3e8eab.slice/crio-c4f4bcd9c12b95cad57ec8980f43b2386b6a282ffd95f74b02a56e0761a6ed99\": RecentStats: unable to find data in memory cache]" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.071762 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:44:10 crc kubenswrapper[4858]: E1122 07:44:10.072463 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" containerName="barbican-db-sync" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.072485 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" containerName="barbican-db-sync" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.072694 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" containerName="barbican-db-sync" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.073947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.077958 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x9z2x" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.078770 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.079794 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.120082 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.164745 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.178010 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.209285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.209571 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.209776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.210203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.210239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74h2w\" (UniqueName: \"kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " 
pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.216204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.247912 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.250012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.257811 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.265936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.311946 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74h2w\" (UniqueName: \"kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312286 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55wq\" (UniqueName: \"kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.312440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.313111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.329358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.329434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.340251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc 
kubenswrapper[4858]: I1122 07:44:10.352013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74h2w\" (UniqueName: \"kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w\") pod \"barbican-keystone-listener-57cdc95956-lbjhn\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.395551 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.398143 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.405575 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.410625 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.413938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55wq\" (UniqueName: \"kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " 
pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.414311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.415443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.415467 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.415546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.415557 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.415589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8bl\" (UniqueName: \"kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.416056 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.416607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.419789 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc 
kubenswrapper[4858]: I1122 07:44:10.420815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.426800 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.474707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55wq\" (UniqueName: \"kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq\") pod \"dnsmasq-dns-6948d6454f-5zfp7\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.515884 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.516804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn8bl\" (UniqueName: \"kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.516866 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.516917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhllw\" (UniqueName: \"kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.516943 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.516966 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.517002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 
07:44:10.517029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.517071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.517113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.517166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.517688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.523291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.529667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.532661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.543334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8bl\" (UniqueName: \"kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl\") pod \"barbican-worker-5744c7f6cf-flhrq\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 
07:44:10.576171 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.619044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.619121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhllw\" (UniqueName: \"kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.619160 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.619198 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.619224 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.620429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.624746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.625346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.627352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: 
\"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.647845 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhllw\" (UniqueName: \"kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw\") pod \"barbican-api-7749fbdfd4-nfjpp\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:10 crc kubenswrapper[4858]: I1122 07:44:10.828809 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:11 crc kubenswrapper[4858]: I1122 07:44:11.441033 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:44:11 crc kubenswrapper[4858]: I1122 07:44:11.526400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:11 crc kubenswrapper[4858]: I1122 07:44:11.637884 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:11 crc kubenswrapper[4858]: I1122 07:44:11.649799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:44:12 crc kubenswrapper[4858]: I1122 07:44:12.047528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" event={"ID":"faf4098e-38a9-4ccf-bd60-7eccc9c294b0","Type":"ContainerStarted","Data":"89576361d8617a5c7049151cc3e281797d00272243202a711381c8612b5ea910"} Nov 22 07:44:12 crc kubenswrapper[4858]: I1122 07:44:12.050735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerStarted","Data":"fb6a9d4d58deb91d9d36b843716d0349a508180b346b3c085291d8fc93c19c49"} Nov 22 07:44:13 crc kubenswrapper[4858]: W1122 07:44:13.098202 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod856eefbb_a12a_4459_ac1c_9c54a222e2e7.slice/crio-1f261594090e80fc22628126767077a84c56772234b1d58e178a713ea3a15dd6 WatchSource:0}: Error finding container 1f261594090e80fc22628126767077a84c56772234b1d58e178a713ea3a15dd6: Status 404 returned error can't find the container with id 1f261594090e80fc22628126767077a84c56772234b1d58e178a713ea3a15dd6 Nov 22 07:44:13 crc kubenswrapper[4858]: W1122 07:44:13.099662 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa777a2_4dd0_407d_b615_34d7fcd0845b.slice/crio-88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3 WatchSource:0}: Error finding container 88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3: Status 404 returned error can't find the container with id 88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3 Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.734032 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.735989 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.739025 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.741575 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744842 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnlr\" (UniqueName: \"kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.744984 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.745034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.761789 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:44:13 crc kubenswrapper[4858]: E1122 07:44:13.823866 4858 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925" Nov 22 07:44:13 crc kubenswrapper[4858]: E1122 07:44:13.824106 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-658vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-hrts7_openstack(f5712f6e-4ef2-4de1-9093-5fa00d6a1d08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:44:13 crc kubenswrapper[4858]: E1122 07:44:13.825556 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-hrts7" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.847839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.847934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.847963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnlr\" (UniqueName: \"kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.848023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.848111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.848185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.848343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.849569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.861351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.861456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.862876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.869304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.872424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:13 crc kubenswrapper[4858]: I1122 07:44:13.872911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnlr\" (UniqueName: \"kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr\") pod \"barbican-api-964b97968-m9n7r\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.062504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.101522 4858 generic.go:334] "Generic (PLEG): container finished" podID="51625658-4507-4c2e-9a45-26ff0718bd44" containerID="ddde1c52caa82e56c239626a2d2ca389f6803b1b018fce673d7703cca8d7efa4" exitCode=0 Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.101717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerDied","Data":"ddde1c52caa82e56c239626a2d2ca389f6803b1b018fce673d7703cca8d7efa4"} Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.108565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerStarted","Data":"88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3"} Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.114661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerStarted","Data":"d4ef5730b65c22808bf234423220d0df97ffc893d3125fbb9c5491ef37c0888f"} Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.114958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerStarted","Data":"1f261594090e80fc22628126767077a84c56772234b1d58e178a713ea3a15dd6"} Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.122203 4858 generic.go:334] "Generic (PLEG): container finished" podID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerID="765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf" exitCode=0 Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.123710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" 
event={"ID":"faf4098e-38a9-4ccf-bd60-7eccc9c294b0","Type":"ContainerDied","Data":"765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf"} Nov 22 07:44:14 crc kubenswrapper[4858]: E1122 07:44:14.127654 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:cd3cf7a34053e850b4d4f9f4ea4c74953a54a42fd18e47d7c01d44a88923e925\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-hrts7" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.289907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.837052 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:44:14 crc kubenswrapper[4858]: I1122 07:44:14.865967 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.121450 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.142706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" event={"ID":"faf4098e-38a9-4ccf-bd60-7eccc9c294b0","Type":"ContainerStarted","Data":"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4"} Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.142906 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.151126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51625658-4507-4c2e-9a45-26ff0718bd44","Type":"ContainerDied","Data":"f9ee36a28320441ec72a184cf1a75ce752839462b2a4087caabb0a8e4cbb0828"} Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.151998 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.153622 4858 scope.go:117] "RemoveContainer" containerID="f3617a054edc86d77357d86bf813f2c9836bcad34a77c4c41ef2401d3ea3f0d9" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.158301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerStarted","Data":"e4462a426eb7ae015819af160f2e441ccb1c9c85055dd67bd973b741939f08f1"} Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.172350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerStarted","Data":"097f9ae9b7d5ed099348c9ce35d8cf273488ec1139ef85284f8dd20007731c47"} Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.213216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmvq\" (UniqueName: \"kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.214696 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.214889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.215020 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.215086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.215154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.215226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.217773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd" (OuterVolumeSpecName: 
"run-httpd") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.219849 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.232796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts" (OuterVolumeSpecName: "scripts") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.239378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq" (OuterVolumeSpecName: "kube-api-access-brmvq") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "kube-api-access-brmvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.246478 4858 scope.go:117] "RemoveContainer" containerID="45c8fb1d759982854bb7fe975a6194a27cbad96b147c98915ee92c44bd9d577a" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.292405 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podStartSLOduration=5.292368122 podStartE2EDuration="5.292368122s" podCreationTimestamp="2025-11-22 07:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:15.26611637 +0000 UTC m=+2017.107539406" watchObservedRunningTime="2025-11-22 07:44:15.292368122 +0000 UTC m=+2017.133791138" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.294995 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" podStartSLOduration=5.294970845 podStartE2EDuration="5.294970845s" podCreationTimestamp="2025-11-22 07:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:15.226217923 +0000 UTC m=+2017.067640949" watchObservedRunningTime="2025-11-22 07:44:15.294970845 +0000 UTC m=+2017.136393851" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.322402 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmvq\" (UniqueName: \"kubernetes.io/projected/51625658-4507-4c2e-9a45-26ff0718bd44-kube-api-access-brmvq\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.322462 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.322483 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/51625658-4507-4c2e-9a45-26ff0718bd44-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.322495 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.355142 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.391959 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.422756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data" (OuterVolumeSpecName: "config-data") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.424733 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") pod \"51625658-4507-4c2e-9a45-26ff0718bd44\" (UID: \"51625658-4507-4c2e-9a45-26ff0718bd44\") " Nov 22 07:44:15 crc kubenswrapper[4858]: W1122 07:44:15.425834 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/51625658-4507-4c2e-9a45-26ff0718bd44/volumes/kubernetes.io~secret/config-data Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.425879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data" (OuterVolumeSpecName: "config-data") pod "51625658-4507-4c2e-9a45-26ff0718bd44" (UID: "51625658-4507-4c2e-9a45-26ff0718bd44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.426571 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.426630 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.426648 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51625658-4507-4c2e-9a45-26ff0718bd44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.509951 4858 scope.go:117] "RemoveContainer" containerID="ddde1c52caa82e56c239626a2d2ca389f6803b1b018fce673d7703cca8d7efa4" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.555779 4858 scope.go:117] "RemoveContainer" containerID="35730efbe27572ee3c17cce02b15de85ffad4d0730d344bbea99ff1443a74918" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.655160 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.667945 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.678060 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:15 crc kubenswrapper[4858]: E1122 07:44:15.678887 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-central-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.678921 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-central-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: E1122 07:44:15.678954 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="proxy-httpd" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.678962 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="proxy-httpd" Nov 22 07:44:15 crc kubenswrapper[4858]: E1122 07:44:15.678986 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="sg-core" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.678992 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="sg-core" Nov 22 07:44:15 crc kubenswrapper[4858]: E1122 07:44:15.679007 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-notification-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.679013 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-notification-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.679228 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-central-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.679266 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="sg-core" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.679284 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="ceilometer-notification-agent" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.679292 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" containerName="proxy-httpd" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.682513 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.685057 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.687245 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.691756 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4wwz\" (UniqueName: \"kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.735673 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.829907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.830037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.838185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4wwz\" (UniqueName: \"kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.838266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.839046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.839483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.839161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.839708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.840645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.840693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.840812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.848512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.850040 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.853269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.856038 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:15 crc kubenswrapper[4858]: I1122 07:44:15.862561 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4wwz\" (UniqueName: \"kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz\") pod \"ceilometer-0\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " pod="openstack/ceilometer-0" Nov 22 07:44:16 crc kubenswrapper[4858]: I1122 07:44:16.017548 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:16 crc kubenswrapper[4858]: I1122 07:44:16.199923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerStarted","Data":"3da885cb1a497446e4704b17b4b8aaf873885fce07483c60700f3f890b5ad6e2"} Nov 22 07:44:17 crc kubenswrapper[4858]: I1122 07:44:17.578487 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51625658-4507-4c2e-9a45-26ff0718bd44" path="/var/lib/kubelet/pods/51625658-4507-4c2e-9a45-26ff0718bd44/volumes" Nov 22 07:44:17 crc kubenswrapper[4858]: I1122 07:44:17.720146 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.226583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerStarted","Data":"6bf2d7b9ad4531e14c9327a6a63588e930346a2e2dcae212eff919b9b5b4719c"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.226784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerStarted","Data":"0fc2e8610b309ec2b9325b8a5fb9a64e0de3f594df62b7a0fe26ced79e91e89c"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.230211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerStarted","Data":"7d85dd2bf391a295963c1c04a60ba1230b2aacca17a1680433770b7be5c7e8c8"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.232227 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerStarted","Data":"eb0881e19514f6d81f52617dbf9be9e61f92e06b35ab0126735758e2523be484"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.234726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerStarted","Data":"459ed18256c6e74e65f42b2044fae1a1c6a3d48927d45cffc496a022915a3956"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.234797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerStarted","Data":"06711e654f6c8f43dfb70d0e3d0cf613ddc8ac0aa5d4281e2d0aea5c99c77349"} Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.262579 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5744c7f6cf-flhrq" podStartSLOduration=4.209242518 podStartE2EDuration="8.262549458s" podCreationTimestamp="2025-11-22 07:44:10 +0000 UTC" firstStartedPulling="2025-11-22 07:44:13.101708724 +0000 UTC m=+2014.943131730" lastFinishedPulling="2025-11-22 07:44:17.155015664 +0000 UTC m=+2018.996438670" observedRunningTime="2025-11-22 07:44:18.252278219 +0000 UTC m=+2020.093701235" watchObservedRunningTime="2025-11-22 07:44:18.262549458 +0000 UTC m=+2020.103972464" Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.294197 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-964b97968-m9n7r" podStartSLOduration=5.294154441 
podStartE2EDuration="5.294154441s" podCreationTimestamp="2025-11-22 07:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:18.283627524 +0000 UTC m=+2020.125050530" watchObservedRunningTime="2025-11-22 07:44:18.294154441 +0000 UTC m=+2020.135577457" Nov 22 07:44:18 crc kubenswrapper[4858]: I1122 07:44:18.315844 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" podStartSLOduration=2.604771766 podStartE2EDuration="8.315816064s" podCreationTimestamp="2025-11-22 07:44:10 +0000 UTC" firstStartedPulling="2025-11-22 07:44:11.444681198 +0000 UTC m=+2013.286104204" lastFinishedPulling="2025-11-22 07:44:17.155725496 +0000 UTC m=+2018.997148502" observedRunningTime="2025-11-22 07:44:18.313243632 +0000 UTC m=+2020.154666658" watchObservedRunningTime="2025-11-22 07:44:18.315816064 +0000 UTC m=+2020.157239070" Nov 22 07:44:19 crc kubenswrapper[4858]: I1122 07:44:19.063869 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:19 crc kubenswrapper[4858]: I1122 07:44:19.064503 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:20 crc kubenswrapper[4858]: I1122 07:44:20.145056 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:20 crc kubenswrapper[4858]: I1122 07:44:20.265389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerStarted","Data":"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1"} Nov 22 07:44:20 crc kubenswrapper[4858]: I1122 07:44:20.519595 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:20 crc kubenswrapper[4858]: I1122 07:44:20.624706 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:44:20 crc kubenswrapper[4858]: I1122 07:44:20.629904 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="dnsmasq-dns" containerID="cri-o://44f43be9ee1e6688eafaa0de4640204cd8d01b20ac225fb286e7ec36253259ee" gracePeriod=10 Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.319787 4858 generic.go:334] "Generic (PLEG): container finished" podID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerID="44f43be9ee1e6688eafaa0de4640204cd8d01b20ac225fb286e7ec36253259ee" exitCode=0 Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.320168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" event={"ID":"24a5bc60-0e0b-4a28-88b7-49321247f37a","Type":"ContainerDied","Data":"44f43be9ee1e6688eafaa0de4640204cd8d01b20ac225fb286e7ec36253259ee"} Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.606131 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.727712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.728140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.728356 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.728526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.728644 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.728741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z7d5\" (UniqueName: \"kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5\") pod \"24a5bc60-0e0b-4a28-88b7-49321247f37a\" (UID: \"24a5bc60-0e0b-4a28-88b7-49321247f37a\") " Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.755975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5" (OuterVolumeSpecName: "kube-api-access-7z7d5") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "kube-api-access-7z7d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.816550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.834661 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z7d5\" (UniqueName: \"kubernetes.io/projected/24a5bc60-0e0b-4a28-88b7-49321247f37a-kube-api-access-7z7d5\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.834723 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.852112 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.852870 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.853533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.880929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config" (OuterVolumeSpecName: "config") pod "24a5bc60-0e0b-4a28-88b7-49321247f37a" (UID: "24a5bc60-0e0b-4a28-88b7-49321247f37a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.936738 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.936793 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.936803 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:22 crc kubenswrapper[4858]: I1122 07:44:22.936813 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a5bc60-0e0b-4a28-88b7-49321247f37a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.215839 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.223208 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.237568 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.337053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" event={"ID":"24a5bc60-0e0b-4a28-88b7-49321247f37a","Type":"ContainerDied","Data":"d0356efe655ce8d9e540887fbe58eab7f2cc027e1b20e931b6f9c3a1a21b2b51"} Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.337132 4858 scope.go:117] "RemoveContainer" containerID="44f43be9ee1e6688eafaa0de4640204cd8d01b20ac225fb286e7ec36253259ee" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.337290 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdb874957-96wfv" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.395968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.422505 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bdb874957-96wfv"] Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.425062 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.462536 4858 scope.go:117] "RemoveContainer" containerID="7310120c55b4ce88603e9cd0c7b4f626edcfddefd6eb6e17c47588cbd584448b" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.580506 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" path="/var/lib/kubelet/pods/24a5bc60-0e0b-4a28-88b7-49321247f37a/volumes" Nov 22 07:44:23 crc kubenswrapper[4858]: I1122 07:44:23.602120 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:24 crc kubenswrapper[4858]: I1122 07:44:24.354366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerStarted","Data":"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda"} Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.422343 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.429506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerStarted","Data":"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33"} Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.731475 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.817509 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.817809 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api-log" containerID="cri-o://d4ef5730b65c22808bf234423220d0df97ffc893d3125fbb9c5491ef37c0888f" gracePeriod=30 Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.818396 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" containerID="cri-o://097f9ae9b7d5ed099348c9ce35d8cf273488ec1139ef85284f8dd20007731c47" gracePeriod=30 Nov 22 07:44:26 crc kubenswrapper[4858]: I1122 07:44:26.827005 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Nov 22 07:44:27 crc kubenswrapper[4858]: I1122 07:44:27.444709 4858 generic.go:334] "Generic (PLEG): container finished" podID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" 
containerID="d4ef5730b65c22808bf234423220d0df97ffc893d3125fbb9c5491ef37c0888f" exitCode=143 Nov 22 07:44:27 crc kubenswrapper[4858]: I1122 07:44:27.446636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerDied","Data":"d4ef5730b65c22808bf234423220d0df97ffc893d3125fbb9c5491ef37c0888f"} Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.463050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerStarted","Data":"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b"} Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.463852 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-central-agent" containerID="cri-o://5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1" gracePeriod=30 Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.464253 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.464771 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="proxy-httpd" containerID="cri-o://2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b" gracePeriod=30 Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.464837 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="sg-core" containerID="cri-o://35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33" gracePeriod=30 Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.464888 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-notification-agent" containerID="cri-o://e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda" gracePeriod=30 Nov 22 07:44:28 crc kubenswrapper[4858]: I1122 07:44:28.500653 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.872630841 podStartE2EDuration="13.500625399s" podCreationTimestamp="2025-11-22 07:44:15 +0000 UTC" firstStartedPulling="2025-11-22 07:44:17.720863698 +0000 UTC m=+2019.562286704" lastFinishedPulling="2025-11-22 07:44:27.348858246 +0000 UTC m=+2029.190281262" observedRunningTime="2025-11-22 07:44:28.486192416 +0000 UTC m=+2030.327615422" watchObservedRunningTime="2025-11-22 07:44:28.500625399 +0000 UTC m=+2030.342048405" Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.484662 4858 generic.go:334] "Generic (PLEG): container finished" podID="447fe270-cadb-41bb-95fc-04055c14b5db" containerID="2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b" exitCode=0 Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.485100 4858 generic.go:334] "Generic (PLEG): container finished" podID="447fe270-cadb-41bb-95fc-04055c14b5db" containerID="35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33" exitCode=2 Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.485112 4858 generic.go:334] "Generic (PLEG): container finished" podID="447fe270-cadb-41bb-95fc-04055c14b5db" 
containerID="e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda" exitCode=0 Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.485310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerDied","Data":"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b"} Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.485429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerDied","Data":"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33"} Nov 22 07:44:29 crc kubenswrapper[4858]: I1122 07:44:29.485453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerDied","Data":"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda"} Nov 22 07:44:30 crc kubenswrapper[4858]: I1122 07:44:30.496362 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hrts7" event={"ID":"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08","Type":"ContainerStarted","Data":"9fa0d715445d9cabd5993deddac4cf06600dfcb8a11d1fc5d81fa7dadce6684f"} Nov 22 07:44:30 crc kubenswrapper[4858]: I1122 07:44:30.521801 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hrts7" podStartSLOduration=2.045457204 podStartE2EDuration="33.521773777s" podCreationTimestamp="2025-11-22 07:43:57 +0000 UTC" firstStartedPulling="2025-11-22 07:43:58.207338239 +0000 UTC m=+2000.048761255" lastFinishedPulling="2025-11-22 07:44:29.683654822 +0000 UTC m=+2031.525077828" observedRunningTime="2025-11-22 07:44:30.515584069 +0000 UTC m=+2032.357007085" watchObservedRunningTime="2025-11-22 07:44:30.521773777 +0000 UTC m=+2032.363196783" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.221490 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4wwz\" (UniqueName: \"kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.354516 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle\") pod \"447fe270-cadb-41bb-95fc-04055c14b5db\" (UID: \"447fe270-cadb-41bb-95fc-04055c14b5db\") " Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.355169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.355358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.360794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts" (OuterVolumeSpecName: "scripts") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.362978 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz" (OuterVolumeSpecName: "kube-api-access-g4wwz") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "kube-api-access-g4wwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.382900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.434073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.457564 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.457688 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.457757 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4wwz\" (UniqueName: \"kubernetes.io/projected/447fe270-cadb-41bb-95fc-04055c14b5db-kube-api-access-g4wwz\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.457861 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/447fe270-cadb-41bb-95fc-04055c14b5db-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.457945 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.458010 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.464369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data" (OuterVolumeSpecName: "config-data") pod "447fe270-cadb-41bb-95fc-04055c14b5db" (UID: "447fe270-cadb-41bb-95fc-04055c14b5db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.508208 4858 generic.go:334] "Generic (PLEG): container finished" podID="447fe270-cadb-41bb-95fc-04055c14b5db" containerID="5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1" exitCode=0 Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.508263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerDied","Data":"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1"} Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.508304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"447fe270-cadb-41bb-95fc-04055c14b5db","Type":"ContainerDied","Data":"eb0881e19514f6d81f52617dbf9be9e61f92e06b35ab0126735758e2523be484"} Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.508339 4858 scope.go:117] "RemoveContainer" containerID="2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.508373 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.539678 4858 scope.go:117] "RemoveContainer" containerID="35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.559881 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/447fe270-cadb-41bb-95fc-04055c14b5db-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.574912 4858 scope.go:117] "RemoveContainer" containerID="e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.583141 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.591433 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604184 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604771 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="sg-core" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604798 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="sg-core" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604816 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="init" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604824 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="init" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604853 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-central-agent" Nov 22 07:44:31 crc 
kubenswrapper[4858]: I1122 07:44:31.604863 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-central-agent" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604889 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="dnsmasq-dns" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604896 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="dnsmasq-dns" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604920 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-notification-agent" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604928 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-notification-agent" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.604941 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="proxy-httpd" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.604949 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="proxy-httpd" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.605870 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a5bc60-0e0b-4a28-88b7-49321247f37a" containerName="dnsmasq-dns" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.605894 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="sg-core" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.605909 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="proxy-httpd" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.605920 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-notification-agent" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.605927 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" containerName="ceilometer-central-agent" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.607818 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.615805 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.619356 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.619683 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.620743 4858 scope.go:117] "RemoveContainer" containerID="5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.654397 4858 scope.go:117] "RemoveContainer" containerID="2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.654871 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b\": container with ID starting with 2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b not found: ID does not exist" containerID="2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.654920 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b"} err="failed to get container status \"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b\": rpc error: code = NotFound desc = could not find container \"2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b\": container with ID starting with 2241f901e4323ad085f4f32517358cbbbe5ff25e4abe8544ffc18f4f44911a8b not found: ID does not exist" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.654956 4858 scope.go:117] "RemoveContainer" containerID="35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.655274 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33\": container with ID starting with 35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33 not found: ID does not exist" containerID="35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.655363 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33"} err="failed to get container status \"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33\": rpc error: code = NotFound desc = could not find container \"35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33\": container with ID starting with 35e8734b882ba39f66b2726f7c739ea69fa28e3f34839e7f67303befced46a33 not found: ID does not exist" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.655388 4858 scope.go:117] "RemoveContainer" containerID="e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.655634 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda\": container with ID starting with e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda not found: ID does not exist" containerID="e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.655654 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda"} err="failed to get container status \"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda\": rpc error: code = NotFound desc = could not find container \"e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda\": container with ID starting with e55ee569e118c1d4a4c454553b2a7307e6f9f24f0aaacecfcb4f099866069bda not found: ID does not exist" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.655668 4858 scope.go:117] "RemoveContainer" containerID="5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1" Nov 22 07:44:31 crc kubenswrapper[4858]: E1122 07:44:31.655900 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1\": container with ID starting with 5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1 not found: ID does not exist" containerID="5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.655921 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1"} err="failed to get container status \"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1\": rpc error: code = NotFound desc = could not find container \"5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1\": container with ID starting with 5e0725107b76ab0abd1aa54231dcfd87e7eb9bd205bb9d8200cca3b216dd18c1 not found: ID does not exist" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.662773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jvh\" (UniqueName: \"kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.662878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.662928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.663249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.663352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.663421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.663895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766001 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.766297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jvh\" (UniqueName: 
\"kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.767084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.768623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.772133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.773034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.773184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.774638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.792224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jvh\" (UniqueName: \"kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh\") pod \"ceilometer-0\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " pod="openstack/ceilometer-0" Nov 22 07:44:31 crc kubenswrapper[4858]: I1122 07:44:31.948439 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.274843 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:44916->10.217.0.160:9311: read: connection reset by peer" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.274833 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7749fbdfd4-nfjpp" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:44906->10.217.0.160:9311: read: connection reset by peer" Nov 22 07:44:32 crc kubenswrapper[4858]: W1122 07:44:32.532591 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a492265_db6d_4f46_a344_b4ede2abf5bc.slice/crio-896be8f2ae2d353b56905370a8c82bddfa38fe53c50fe1a7594405ae6218e506 WatchSource:0}: Error finding container 896be8f2ae2d353b56905370a8c82bddfa38fe53c50fe1a7594405ae6218e506: Status 404 returned error can't find the container with id 896be8f2ae2d353b56905370a8c82bddfa38fe53c50fe1a7594405ae6218e506 Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.541383 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.553170 4858 generic.go:334] "Generic (PLEG): container finished" podID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerID="097f9ae9b7d5ed099348c9ce35d8cf273488ec1139ef85284f8dd20007731c47" exitCode=0 Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.553238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerDied","Data":"097f9ae9b7d5ed099348c9ce35d8cf273488ec1139ef85284f8dd20007731c47"} Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.828355 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.893455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle\") pod \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.893623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data\") pod \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.894572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom\") pod \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.894626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs\") pod \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.894722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhllw\" (UniqueName: \"kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw\") pod \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\" (UID: \"856eefbb-a12a-4459-ac1c-9c54a222e2e7\") " Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.895419 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs" (OuterVolumeSpecName: "logs") pod "856eefbb-a12a-4459-ac1c-9c54a222e2e7" (UID: "856eefbb-a12a-4459-ac1c-9c54a222e2e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.895768 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856eefbb-a12a-4459-ac1c-9c54a222e2e7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.903720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw" (OuterVolumeSpecName: "kube-api-access-dhllw") pod "856eefbb-a12a-4459-ac1c-9c54a222e2e7" (UID: "856eefbb-a12a-4459-ac1c-9c54a222e2e7"). InnerVolumeSpecName "kube-api-access-dhllw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.905619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "856eefbb-a12a-4459-ac1c-9c54a222e2e7" (UID: "856eefbb-a12a-4459-ac1c-9c54a222e2e7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.935230 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856eefbb-a12a-4459-ac1c-9c54a222e2e7" (UID: "856eefbb-a12a-4459-ac1c-9c54a222e2e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.961718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data" (OuterVolumeSpecName: "config-data") pod "856eefbb-a12a-4459-ac1c-9c54a222e2e7" (UID: "856eefbb-a12a-4459-ac1c-9c54a222e2e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.998082 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhllw\" (UniqueName: \"kubernetes.io/projected/856eefbb-a12a-4459-ac1c-9c54a222e2e7-kube-api-access-dhllw\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.998136 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.998150 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:32 crc kubenswrapper[4858]: I1122 07:44:32.998163 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856eefbb-a12a-4459-ac1c-9c54a222e2e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.607963 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447fe270-cadb-41bb-95fc-04055c14b5db" path="/var/lib/kubelet/pods/447fe270-cadb-41bb-95fc-04055c14b5db/volumes" Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.611498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerStarted","Data":"896be8f2ae2d353b56905370a8c82bddfa38fe53c50fe1a7594405ae6218e506"} Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.612239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7749fbdfd4-nfjpp" event={"ID":"856eefbb-a12a-4459-ac1c-9c54a222e2e7","Type":"ContainerDied","Data":"1f261594090e80fc22628126767077a84c56772234b1d58e178a713ea3a15dd6"} Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.612313 4858 scope.go:117] "RemoveContainer" containerID="097f9ae9b7d5ed099348c9ce35d8cf273488ec1139ef85284f8dd20007731c47" Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.612581 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7749fbdfd4-nfjpp" Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.676813 4858 scope.go:117] "RemoveContainer" containerID="d4ef5730b65c22808bf234423220d0df97ffc893d3125fbb9c5491ef37c0888f" Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.683740 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:33 crc kubenswrapper[4858]: I1122 07:44:33.696217 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7749fbdfd4-nfjpp"] Nov 22 07:44:34 crc kubenswrapper[4858]: I1122 07:44:34.630165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerStarted","Data":"9d59335978a6458bef85f0af179b1af2ea8edd604a6e2cf0b1d9da4d96d94fd1"} Nov 22 07:44:35 crc kubenswrapper[4858]: I1122 07:44:35.549542 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" path="/var/lib/kubelet/pods/856eefbb-a12a-4459-ac1c-9c54a222e2e7/volumes" Nov 22 07:44:35 crc kubenswrapper[4858]: I1122 07:44:35.648250 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" containerID="0be496c05b6ca9bbc0552d43b838acc7ab82ea2f2a395f854baaaaee0619ac0a" exitCode=0 Nov 22 07:44:35 crc kubenswrapper[4858]: I1122 07:44:35.648352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hfjnq" event={"ID":"6f5e0507-55cd-49e4-bf31-1e13d0bfee53","Type":"ContainerDied","Data":"0be496c05b6ca9bbc0552d43b838acc7ab82ea2f2a395f854baaaaee0619ac0a"} Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.053587 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.089986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090131 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7kfj\" (UniqueName: \"kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090295 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data\") pod \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\" (UID: \"6f5e0507-55cd-49e4-bf31-1e13d0bfee53\") " Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.090958 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.097671 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.098535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj" (OuterVolumeSpecName: "kube-api-access-g7kfj") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "kube-api-access-g7kfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.100305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts" (OuterVolumeSpecName: "scripts") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.126220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.154906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data" (OuterVolumeSpecName: "config-data") pod "6f5e0507-55cd-49e4-bf31-1e13d0bfee53" (UID: "6f5e0507-55cd-49e4-bf31-1e13d0bfee53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194021 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7kfj\" (UniqueName: \"kubernetes.io/projected/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-kube-api-access-g7kfj\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194075 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194089 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194102 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194113 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.194131 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0507-55cd-49e4-bf31-1e13d0bfee53-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.671988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerStarted","Data":"bba2fab9280d077598992e8924894db511ebabc97507b323542ae8516afb6e96"} Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.678114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hfjnq" 
event={"ID":"6f5e0507-55cd-49e4-bf31-1e13d0bfee53","Type":"ContainerDied","Data":"7040ee90c8c4ef20bef095ba75b61745a4664c7e7d7ba5855b21168a60cbb2ee"} Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.678177 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7040ee90c8c4ef20bef095ba75b61745a4664c7e7d7ba5855b21168a60cbb2ee" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.678262 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hfjnq" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.980750 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:37 crc kubenswrapper[4858]: E1122 07:44:37.981554 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" containerName="cinder-db-sync" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981579 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" containerName="cinder-db-sync" Nov 22 07:44:37 crc kubenswrapper[4858]: E1122 07:44:37.981612 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api-log" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981620 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api-log" Nov 22 07:44:37 crc kubenswrapper[4858]: E1122 07:44:37.981642 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981651 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981862 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" containerName="cinder-db-sync" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981887 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api-log" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.981895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="856eefbb-a12a-4459-ac1c-9c54a222e2e7" containerName="barbican-api" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.982936 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.988235 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.988550 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dpj5x" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.990249 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:44:37 crc kubenswrapper[4858]: I1122 07:44:37.990481 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.026693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gg7p\" (UniqueName: \"kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.028561 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.126419 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.129012 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.129454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.129568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.129676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.129845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.130017 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.131454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.131554 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gg7p\" (UniqueName: \"kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.146021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.146141 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.149447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.151003 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.147201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.178115 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gg7p\" (UniqueName: \"kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p\") pod \"cinder-scheduler-0\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.232665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.232715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.232890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.233005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.233136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.233171 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh552\" (UniqueName: \"kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc 
kubenswrapper[4858]: I1122 07:44:38.314418 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335063 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.335344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh552\" (UniqueName: \"kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.336673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.336674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.336851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: 
\"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.336860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.337626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.387824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh552\" (UniqueName: \"kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552\") pod \"dnsmasq-dns-69f45f7cc5-gj9xc\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.551740 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.554220 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.562819 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.566549 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.588837 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.644936 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.645071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.645132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.645185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dlg\" (UniqueName: \"kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.645247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.645313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.650139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc 
kubenswrapper[4858]: I1122 07:44:38.756239 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dlg\" (UniqueName: \"kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.756400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.760926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.761654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.769120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.773791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.775019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.784083 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.818073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dlg\" (UniqueName: \"kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg\") pod \"cinder-api-0\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " pod="openstack/cinder-api-0" Nov 22 07:44:38 crc kubenswrapper[4858]: I1122 07:44:38.888354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.189292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.340494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.626807 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:39 crc kubenswrapper[4858]: W1122 07:44:39.632474 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec310ff6_6cfd_480d_b740_6dff362667dc.slice/crio-52e62a935658d7703a21c034e779daf7af382686e7620af4e257a3298e6c838d WatchSource:0}: Error finding container 52e62a935658d7703a21c034e779daf7af382686e7620af4e257a3298e6c838d: Status 404 returned error can't find the container with id 52e62a935658d7703a21c034e779daf7af382686e7620af4e257a3298e6c838d Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.726626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerStarted","Data":"52e62a935658d7703a21c034e779daf7af382686e7620af4e257a3298e6c838d"} Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.733876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerStarted","Data":"2d5881fa70d878e90002ec9a3f9a2455fce05398e2faacd091a1674ed4035583"} Nov 22 07:44:39 crc kubenswrapper[4858]: I1122 07:44:39.737027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" event={"ID":"2f3ebd41-7392-415b-8e54-f56644e0f6e3","Type":"ContainerStarted","Data":"7bd5d7a3bfba924c4dc92e820a8ada9122469e1d9df0de5dd765a9b447b53047"} Nov 22 07:44:40 crc kubenswrapper[4858]: E1122 07:44:40.062952 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f3ebd41_7392_415b_8e54_f56644e0f6e3.slice/crio-7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:44:40 crc kubenswrapper[4858]: I1122 07:44:40.763184 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerID="7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0" exitCode=0 Nov 22 07:44:40 crc kubenswrapper[4858]: I1122 07:44:40.764278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" event={"ID":"2f3ebd41-7392-415b-8e54-f56644e0f6e3","Type":"ContainerDied","Data":"7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0"} Nov 22 07:44:40 crc kubenswrapper[4858]: I1122 07:44:40.903134 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:41 crc kubenswrapper[4858]: I1122 07:44:41.793654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" event={"ID":"2f3ebd41-7392-415b-8e54-f56644e0f6e3","Type":"ContainerStarted","Data":"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44"} Nov 22 07:44:41 crc kubenswrapper[4858]: I1122 07:44:41.797960 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:41 crc kubenswrapper[4858]: I1122 07:44:41.807792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerStarted","Data":"494f0861d9b62138516c7826cd32b91f4960e96d7f753c57935abef9ea29daea"} Nov 22 07:44:41 crc kubenswrapper[4858]: I1122 07:44:41.818522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerStarted","Data":"fb91c4ea7714c5d907be66358f1092d1da7b36e176a6febed23e0441649976bd"} Nov 22 07:44:41 crc kubenswrapper[4858]: I1122 07:44:41.846189 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" podStartSLOduration=3.846156613 podStartE2EDuration="3.846156613s" podCreationTimestamp="2025-11-22 07:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:41.836865295 +0000 UTC m=+2043.678288321" watchObservedRunningTime="2025-11-22 07:44:41.846156613 +0000 UTC m=+2043.687579619" Nov 22 07:44:42 crc kubenswrapper[4858]: I1122 07:44:42.840730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerStarted","Data":"ce587afe82080d6a71aaaa036360b6368fb78f681039713791e599790c72d82b"} Nov 22 07:44:42 crc kubenswrapper[4858]: I1122 07:44:42.855989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerStarted","Data":"0d03c825d0a932f41a6654eb8bade5999bfe7541a53e5fa5227e7be8ad23cac6"} Nov 22 07:44:42 crc kubenswrapper[4858]: I1122 07:44:42.856172 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api-log" containerID="cri-o://fb91c4ea7714c5d907be66358f1092d1da7b36e176a6febed23e0441649976bd" gracePeriod=30 Nov 22 07:44:42 crc kubenswrapper[4858]: I1122 07:44:42.856690 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api" containerID="cri-o://0d03c825d0a932f41a6654eb8bade5999bfe7541a53e5fa5227e7be8ad23cac6" gracePeriod=30 Nov 22 07:44:42 crc kubenswrapper[4858]: I1122 07:44:42.893774 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.893736358 podStartE2EDuration="4.893736358s" 
podCreationTimestamp="2025-11-22 07:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:42.880972369 +0000 UTC m=+2044.722395395" watchObservedRunningTime="2025-11-22 07:44:42.893736358 +0000 UTC m=+2044.735159364" Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.873305 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerID="0d03c825d0a932f41a6654eb8bade5999bfe7541a53e5fa5227e7be8ad23cac6" exitCode=0 Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.874255 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerID="fb91c4ea7714c5d907be66358f1092d1da7b36e176a6febed23e0441649976bd" exitCode=143 Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.874673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerDied","Data":"0d03c825d0a932f41a6654eb8bade5999bfe7541a53e5fa5227e7be8ad23cac6"} Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.874722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerDied","Data":"fb91c4ea7714c5d907be66358f1092d1da7b36e176a6febed23e0441649976bd"} Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.883610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerStarted","Data":"0d566c5acef6f47e009142bdd9d43c32902c51cf6f73bccb6a9953c59d3f12a5"} Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.897154 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:44:43 crc kubenswrapper[4858]: I1122 07:44:43.930145 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.027644208 podStartE2EDuration="6.930116364s" podCreationTimestamp="2025-11-22 07:44:37 +0000 UTC" firstStartedPulling="2025-11-22 07:44:39.195537163 +0000 UTC m=+2041.036960169" lastFinishedPulling="2025-11-22 07:44:41.098009319 +0000 UTC m=+2042.939432325" observedRunningTime="2025-11-22 07:44:43.924392331 +0000 UTC m=+2045.765815337" watchObservedRunningTime="2025-11-22 07:44:43.930116364 +0000 UTC m=+2045.771539370" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.153543 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.233746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234380 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dlg\" (UniqueName: \"kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234634 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234028 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.234719 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs\") pod \"ec310ff6-6cfd-480d-b740-6dff362667dc\" (UID: \"ec310ff6-6cfd-480d-b740-6dff362667dc\") " Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.236267 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs" (OuterVolumeSpecName: "logs") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.236402 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec310ff6-6cfd-480d-b740-6dff362667dc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.243448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.255874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts" (OuterVolumeSpecName: "scripts") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.268404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg" (OuterVolumeSpecName: "kube-api-access-r7dlg") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "kube-api-access-r7dlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.272446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.296610 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data" (OuterVolumeSpecName: "config-data") pod "ec310ff6-6cfd-480d-b740-6dff362667dc" (UID: "ec310ff6-6cfd-480d-b740-6dff362667dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.339750 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.340088 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.340239 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.340941 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7dlg\" (UniqueName: \"kubernetes.io/projected/ec310ff6-6cfd-480d-b740-6dff362667dc-kube-api-access-r7dlg\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.341063 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec310ff6-6cfd-480d-b740-6dff362667dc-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.341146 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec310ff6-6cfd-480d-b740-6dff362667dc-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.899131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerStarted","Data":"1a91d2cfa96fa6edb7decc0c115f85dcf4cbeb3ff22a2ae3f3f985a415c238d5"} Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.899790 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.901942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec310ff6-6cfd-480d-b740-6dff362667dc","Type":"ContainerDied","Data":"52e62a935658d7703a21c034e779daf7af382686e7620af4e257a3298e6c838d"} Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.902018 4858 scope.go:117] "RemoveContainer" containerID="0d03c825d0a932f41a6654eb8bade5999bfe7541a53e5fa5227e7be8ad23cac6" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.902187 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.932838 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7980186270000003 podStartE2EDuration="13.932810151s" podCreationTimestamp="2025-11-22 07:44:31 +0000 UTC" firstStartedPulling="2025-11-22 07:44:32.551150549 +0000 UTC m=+2034.392573555" lastFinishedPulling="2025-11-22 07:44:43.685942083 +0000 UTC m=+2045.527365079" observedRunningTime="2025-11-22 07:44:44.923666768 +0000 UTC m=+2046.765089774" watchObservedRunningTime="2025-11-22 07:44:44.932810151 +0000 UTC m=+2046.774233157" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.944985 4858 scope.go:117] "RemoveContainer" containerID="fb91c4ea7714c5d907be66358f1092d1da7b36e176a6febed23e0441649976bd" Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.955932 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:44 crc kubenswrapper[4858]: I1122 07:44:44.965411 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.009826 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:45 crc kubenswrapper[4858]: E1122 07:44:45.010983 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api-log" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.011111 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api-log" Nov 22 07:44:45 crc kubenswrapper[4858]: E1122 07:44:45.011205 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.011273 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.011751 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.011857 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" containerName="cinder-api-log" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.013394 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.021210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.021970 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.022262 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.029637 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.060879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.061088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.061115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.061182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.163296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.163483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.163510 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.164900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.164963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.165023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.165482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.165563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.165599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.165673 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.166065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.173650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.174266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.175533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.175580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.176992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.177476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.186221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f\") pod \"cinder-api-0\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.348544 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.563958 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec310ff6-6cfd-480d-b740-6dff362667dc" path="/var/lib/kubelet/pods/ec310ff6-6cfd-480d-b740-6dff362667dc/volumes" Nov 22 07:44:45 crc kubenswrapper[4858]: W1122 07:44:45.880790 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaa57087_ec21_4cff_aa47_68358e8f5039.slice/crio-67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2 WatchSource:0}: Error finding container 67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2: Status 404 returned error can't find the container with id 67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2 Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.903413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:44:45 crc kubenswrapper[4858]: I1122 07:44:45.934610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerStarted","Data":"67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2"} Nov 22 07:44:46 crc kubenswrapper[4858]: I1122 07:44:46.949069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerStarted","Data":"ab0829a2a45dd01e2464217508ca78a6a10b04e91b998fe9def047ef8aebbd38"} Nov 22 07:44:47 crc kubenswrapper[4858]: I1122 07:44:47.963470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerStarted","Data":"98e56862b8436374df28d433ceba2eba7598bc78c7a8982ea0f1f152b99d551a"} Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.314961 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.568557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.658452 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.658726 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="dnsmasq-dns" containerID="cri-o://a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4" gracePeriod=10 Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.695251 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:44:48 crc kubenswrapper[4858]: I1122 07:44:48.975584 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.016508 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.016473123 podStartE2EDuration="5.016473123s" podCreationTimestamp="2025-11-22 07:44:44 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:49.001583946 +0000 UTC m=+2050.843006982" watchObservedRunningTime="2025-11-22 07:44:49.016473123 +0000 UTC m=+2050.857896129" Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.938768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997559 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:49 crc kubenswrapper[4858]: I1122 07:44:49.997736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q55wq\" (UniqueName: \"kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq\") pod \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\" (UID: \"faf4098e-38a9-4ccf-bd60-7eccc9c294b0\") " Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.004409 4858 generic.go:334] "Generic (PLEG): container finished" podID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerID="a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4" exitCode=0 Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.005739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq" (OuterVolumeSpecName: "kube-api-access-q55wq") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "kube-api-access-q55wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.006408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" event={"ID":"faf4098e-38a9-4ccf-bd60-7eccc9c294b0","Type":"ContainerDied","Data":"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4"} Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.006477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" event={"ID":"faf4098e-38a9-4ccf-bd60-7eccc9c294b0","Type":"ContainerDied","Data":"89576361d8617a5c7049151cc3e281797d00272243202a711381c8612b5ea910"} Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.006497 4858 scope.go:117] "RemoveContainer" containerID="a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.006773 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6948d6454f-5zfp7" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.064528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.076471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.078192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.081795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config" (OuterVolumeSpecName: "config") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.102374 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.102822 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.102852 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q55wq\" (UniqueName: \"kubernetes.io/projected/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-kube-api-access-q55wq\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.102885 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.102899 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.119639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "faf4098e-38a9-4ccf-bd60-7eccc9c294b0" (UID: "faf4098e-38a9-4ccf-bd60-7eccc9c294b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.204905 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faf4098e-38a9-4ccf-bd60-7eccc9c294b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.212601 4858 scope.go:117] "RemoveContainer" containerID="765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.244339 4858 scope.go:117] "RemoveContainer" containerID="a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4" Nov 22 07:44:50 crc kubenswrapper[4858]: E1122 07:44:50.246558 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4\": container with ID starting with a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4 not found: ID does not exist" containerID="a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.246603 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4"} err="failed to get container status \"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4\": rpc error: code = NotFound desc = could not find container \"a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4\": container with ID starting with a6d7cf0d01cdaca6ffc05407eb144de86641ccafe4ba06ec435392ebad51d4e4 not found: ID does not exist" Nov 22 07:44:50 crc 
kubenswrapper[4858]: I1122 07:44:50.246637 4858 scope.go:117] "RemoveContainer" containerID="765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf" Nov 22 07:44:50 crc kubenswrapper[4858]: E1122 07:44:50.247160 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf\": container with ID starting with 765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf not found: ID does not exist" containerID="765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.247194 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf"} err="failed to get container status \"765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf\": rpc error: code = NotFound desc = could not find container \"765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf\": container with ID starting with 765ed7d5a172e2ffa80f4172a05e2826e0887e0f0c81b7df18d3b777debbdebf not found: ID does not exist" Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.364126 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:50 crc kubenswrapper[4858]: I1122 07:44:50.379661 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6948d6454f-5zfp7"] Nov 22 07:44:50 crc kubenswrapper[4858]: E1122 07:44:50.423514 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf4098e_38a9_4ccf_bd60_7eccc9c294b0.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:44:51 crc kubenswrapper[4858]: I1122 07:44:51.552169 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" path="/var/lib/kubelet/pods/faf4098e-38a9-4ccf-bd60-7eccc9c294b0/volumes" Nov 22 07:44:53 crc kubenswrapper[4858]: I1122 07:44:53.713529 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:44:53 crc kubenswrapper[4858]: I1122 07:44:53.786774 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:54 crc kubenswrapper[4858]: I1122 07:44:54.048757 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="cinder-scheduler" containerID="cri-o://ce587afe82080d6a71aaaa036360b6368fb78f681039713791e599790c72d82b" gracePeriod=30 Nov 22 07:44:54 crc kubenswrapper[4858]: I1122 07:44:54.049033 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="probe" containerID="cri-o://0d566c5acef6f47e009142bdd9d43c32902c51cf6f73bccb6a9953c59d3f12a5" gracePeriod=30 Nov 22 07:44:55 crc kubenswrapper[4858]: I1122 07:44:55.063088 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerID="0d566c5acef6f47e009142bdd9d43c32902c51cf6f73bccb6a9953c59d3f12a5" exitCode=0 Nov 22 07:44:55 crc kubenswrapper[4858]: I1122 07:44:55.063152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerDied","Data":"0d566c5acef6f47e009142bdd9d43c32902c51cf6f73bccb6a9953c59d3f12a5"} Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.097540 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerID="ce587afe82080d6a71aaaa036360b6368fb78f681039713791e599790c72d82b" exitCode=0 Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.098251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerDied","Data":"ce587afe82080d6a71aaaa036360b6368fb78f681039713791e599790c72d82b"} Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.270247 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.455470 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gg7p\" (UniqueName: \"kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456269 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456540 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom\") pod \"7b18261f-1731-4a66-a7ea-f87a901c8b82\" (UID: \"7b18261f-1731-4a66-a7ea-f87a901c8b82\") " Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.456532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.457420 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b18261f-1731-4a66-a7ea-f87a901c8b82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.466343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts" (OuterVolumeSpecName: "scripts") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.475608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p" (OuterVolumeSpecName: "kube-api-access-8gg7p") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "kube-api-access-8gg7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.478586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.514458 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.559462 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gg7p\" (UniqueName: \"kubernetes.io/projected/7b18261f-1731-4a66-a7ea-f87a901c8b82-kube-api-access-8gg7p\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.560197 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.560242 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.560256 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.563228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data" (OuterVolumeSpecName: "config-data") pod "7b18261f-1731-4a66-a7ea-f87a901c8b82" (UID: "7b18261f-1731-4a66-a7ea-f87a901c8b82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:44:57 crc kubenswrapper[4858]: I1122 07:44:57.662487 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b18261f-1731-4a66-a7ea-f87a901c8b82-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.073516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.111070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7b18261f-1731-4a66-a7ea-f87a901c8b82","Type":"ContainerDied","Data":"2d5881fa70d878e90002ec9a3f9a2455fce05398e2faacd091a1674ed4035583"} Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.112332 4858 scope.go:117] "RemoveContainer" containerID="0d566c5acef6f47e009142bdd9d43c32902c51cf6f73bccb6a9953c59d3f12a5" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.111149 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.140753 4858 scope.go:117] "RemoveContainer" containerID="ce587afe82080d6a71aaaa036360b6368fb78f681039713791e599790c72d82b" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.164440 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.175161 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.245417 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:58 crc kubenswrapper[4858]: E1122 07:44:58.246103 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="cinder-scheduler" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246124 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="cinder-scheduler" Nov 22 07:44:58 crc kubenswrapper[4858]: E1122 07:44:58.246166 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="init" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246174 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="init" Nov 22 07:44:58 crc kubenswrapper[4858]: E1122 07:44:58.246186 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="dnsmasq-dns" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246197 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="dnsmasq-dns" Nov 22 07:44:58 crc kubenswrapper[4858]: E1122 07:44:58.246218 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="probe" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246226 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="probe" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246476 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="cinder-scheduler" Nov 22 07:44:58 crc 
kubenswrapper[4858]: I1122 07:44:58.246502 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" containerName="probe" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.246518 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf4098e-38a9-4ccf-bd60-7eccc9c294b0" containerName="dnsmasq-dns" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.247895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.249889 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.259244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vf6h\" (UniqueName: \"kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.378869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.481443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " 
pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.481524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.481593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.481619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.482053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.483492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.485735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vf6h\" (UniqueName: \"kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.486469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.486583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.487842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.499903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.507745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vf6h\" (UniqueName: \"kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h\") pod \"cinder-scheduler-0\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " pod="openstack/cinder-scheduler-0" Nov 22 07:44:58 crc kubenswrapper[4858]: I1122 07:44:58.578095 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:44:59 crc kubenswrapper[4858]: I1122 07:44:59.156674 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:44:59 crc kubenswrapper[4858]: I1122 07:44:59.553194 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b18261f-1731-4a66-a7ea-f87a901c8b82" path="/var/lib/kubelet/pods/7b18261f-1731-4a66-a7ea-f87a901c8b82/volumes" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.151563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerStarted","Data":"9a6bd0f287f81f32a2ecd007606ab984ffaa840e52edd920e197cf1530362f85"} Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.152130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerStarted","Data":"e15f9e87fe36673b614eb3863b21098a46ffc6d5d803ce00390940ef29cfa226"} Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.181577 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk"] Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.185042 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.189431 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.189739 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.216592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk"] Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.345545 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.345664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.345834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bd6\" (UniqueName: \"kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.447528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.447620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.447751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bd6\" (UniqueName: \"kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.449752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume\") pod 
\"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.457682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.489647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bd6\" (UniqueName: \"kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6\") pod \"collect-profiles-29396625-kjqjk\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:00 crc kubenswrapper[4858]: I1122 07:45:00.689561 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:01 crc kubenswrapper[4858]: I1122 07:45:01.332653 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk"] Nov 22 07:45:01 crc kubenswrapper[4858]: W1122 07:45:01.336625 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6afba2ba_4cf2_4450_aaac_d7dfe4d4da8c.slice/crio-793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f WatchSource:0}: Error finding container 793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f: Status 404 returned error can't find the container with id 793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f Nov 22 07:45:01 crc kubenswrapper[4858]: I1122 07:45:01.963661 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:45:02 crc kubenswrapper[4858]: I1122 07:45:02.178260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerStarted","Data":"a264ec2d1761e844139d64f8cfd921295756c1e25bdc2ca727b8eecd6b023c10"} Nov 22 07:45:02 crc kubenswrapper[4858]: I1122 07:45:02.180767 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" event={"ID":"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c","Type":"ContainerStarted","Data":"4784e527139c0194e9c1959438eb39267fdc06b2e335446f473a9c52b341697d"} Nov 22 07:45:02 crc kubenswrapper[4858]: I1122 07:45:02.180841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" event={"ID":"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c","Type":"ContainerStarted","Data":"793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f"} Nov 22 07:45:02 crc kubenswrapper[4858]: I1122 07:45:02.238279 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.238256533 podStartE2EDuration="4.238256533s" podCreationTimestamp="2025-11-22 07:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:02.216220207 
+0000 UTC m=+2064.057643233" watchObservedRunningTime="2025-11-22 07:45:02.238256533 +0000 UTC m=+2064.079679539" Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.197092 4858 generic.go:334] "Generic (PLEG): container finished" podID="6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" containerID="4784e527139c0194e9c1959438eb39267fdc06b2e335446f473a9c52b341697d" exitCode=0 Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.197193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" event={"ID":"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c","Type":"ContainerDied","Data":"4784e527139c0194e9c1959438eb39267fdc06b2e335446f473a9c52b341697d"} Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.579212 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.725060 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.725511 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-central-agent" containerID="cri-o://9d59335978a6458bef85f0af179b1af2ea8edd604a6e2cf0b1d9da4d96d94fd1" gracePeriod=30 Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.725620 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="proxy-httpd" containerID="cri-o://1a91d2cfa96fa6edb7decc0c115f85dcf4cbeb3ff22a2ae3f3f985a415c238d5" gracePeriod=30 Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.725672 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="sg-core" containerID="cri-o://494f0861d9b62138516c7826cd32b91f4960e96d7f753c57935abef9ea29daea" gracePeriod=30 Nov 22 07:45:03 crc kubenswrapper[4858]: I1122 07:45:03.725692 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-notification-agent" containerID="cri-o://bba2fab9280d077598992e8924894db511ebabc97507b323542ae8516afb6e96" gracePeriod=30 Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.216666 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerID="494f0861d9b62138516c7826cd32b91f4960e96d7f753c57935abef9ea29daea" exitCode=2 Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.218152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerDied","Data":"494f0861d9b62138516c7826cd32b91f4960e96d7f753c57935abef9ea29daea"} Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.669934 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.760656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume\") pod \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.761014 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8bd6\" (UniqueName: \"kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6\") pod \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.761046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume\") pod \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\" (UID: \"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c\") " Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.762123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" (UID: "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.770040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" (UID: "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.770081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6" (OuterVolumeSpecName: "kube-api-access-q8bd6") pod "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" (UID: "6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c"). InnerVolumeSpecName "kube-api-access-q8bd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.863243 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.863300 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8bd6\" (UniqueName: \"kubernetes.io/projected/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-kube-api-access-q8bd6\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:04 crc kubenswrapper[4858]: I1122 07:45:04.863309 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.235650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" event={"ID":"6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c","Type":"ContainerDied","Data":"793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f"} Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.236134 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793ec1c742c2de24216ae797966d6d9184336fcab2ab9e1f79ac8949fbe34e6f" Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.235681 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk" Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.240481 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerID="1a91d2cfa96fa6edb7decc0c115f85dcf4cbeb3ff22a2ae3f3f985a415c238d5" exitCode=0 Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.240547 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerID="9d59335978a6458bef85f0af179b1af2ea8edd604a6e2cf0b1d9da4d96d94fd1" exitCode=0 Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.240575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerDied","Data":"1a91d2cfa96fa6edb7decc0c115f85dcf4cbeb3ff22a2ae3f3f985a415c238d5"} Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.240731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerDied","Data":"9d59335978a6458bef85f0af179b1af2ea8edd604a6e2cf0b1d9da4d96d94fd1"} Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.783853 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h"] Nov 22 07:45:05 crc kubenswrapper[4858]: I1122 07:45:05.796780 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-6456h"] Nov 22 07:45:06 crc kubenswrapper[4858]: I1122 07:45:06.262889 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerID="bba2fab9280d077598992e8924894db511ebabc97507b323542ae8516afb6e96" exitCode=0 Nov 22 07:45:06 crc kubenswrapper[4858]: I1122 07:45:06.262975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerDied","Data":"bba2fab9280d077598992e8924894db511ebabc97507b323542ae8516afb6e96"} Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.188006 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.282789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a492265-db6d-4f46-a344-b4ede2abf5bc","Type":"ContainerDied","Data":"896be8f2ae2d353b56905370a8c82bddfa38fe53c50fe1a7594405ae6218e506"} Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.282872 4858 scope.go:117] "RemoveContainer" containerID="1a91d2cfa96fa6edb7decc0c115f85dcf4cbeb3ff22a2ae3f3f985a415c238d5" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.282912 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320720 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320814 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320933 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75jvh\" (UniqueName: \"kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.320966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.321001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml\") pod \"2a492265-db6d-4f46-a344-b4ede2abf5bc\" (UID: \"2a492265-db6d-4f46-a344-b4ede2abf5bc\") " Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.321734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.322111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.322775 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.322805 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a492265-db6d-4f46-a344-b4ede2abf5bc-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.334854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh" (OuterVolumeSpecName: "kube-api-access-75jvh") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "kube-api-access-75jvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.334946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts" (OuterVolumeSpecName: "scripts") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.362241 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.405773 4858 scope.go:117] "RemoveContainer" containerID="494f0861d9b62138516c7826cd32b91f4960e96d7f753c57935abef9ea29daea" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.420011 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.426351 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75jvh\" (UniqueName: \"kubernetes.io/projected/2a492265-db6d-4f46-a344-b4ede2abf5bc-kube-api-access-75jvh\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.426395 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.426423 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.426437 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.472995 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data" (OuterVolumeSpecName: "config-data") pod "2a492265-db6d-4f46-a344-b4ede2abf5bc" (UID: "2a492265-db6d-4f46-a344-b4ede2abf5bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.528474 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a492265-db6d-4f46-a344-b4ede2abf5bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.550898 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb9f615-dc32-4f01-884b-db24dfb05c34" path="/var/lib/kubelet/pods/7cb9f615-dc32-4f01-884b-db24dfb05c34/volumes" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.562949 4858 scope.go:117] "RemoveContainer" containerID="bba2fab9280d077598992e8924894db511ebabc97507b323542ae8516afb6e96" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.619688 4858 scope.go:117] "RemoveContainer" containerID="9d59335978a6458bef85f0af179b1af2ea8edd604a6e2cf0b1d9da4d96d94fd1" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.626252 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.663314 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.677810 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:07 crc kubenswrapper[4858]: E1122 07:45:07.678417 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" containerName="collect-profiles" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678432 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" containerName="collect-profiles" Nov 22 07:45:07 crc kubenswrapper[4858]: E1122 07:45:07.678445 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="proxy-httpd" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678451 4858 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="proxy-httpd" Nov 22 07:45:07 crc kubenswrapper[4858]: E1122 07:45:07.678465 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="sg-core" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="sg-core" Nov 22 07:45:07 crc kubenswrapper[4858]: E1122 07:45:07.678494 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-central-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678501 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-central-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: E1122 07:45:07.678542 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-notification-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678548 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-notification-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678731 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-notification-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678749 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="proxy-httpd" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678762 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" containerName="collect-profiles" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678781 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="ceilometer-central-agent" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.678790 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" containerName="sg-core" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.681289 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.686084 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.686393 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.693514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838491 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4zx\" (UniqueName: \"kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.838589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.940892 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941028 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4zx\" (UniqueName: \"kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941102 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.941204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.942611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.942619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.947024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.947116 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.947403 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.949814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:07 crc kubenswrapper[4858]: I1122 07:45:07.960649 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4zx\" (UniqueName: \"kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx\") pod \"ceilometer-0\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " pod="openstack/ceilometer-0" Nov 22 07:45:08 crc kubenswrapper[4858]: I1122 07:45:08.010078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:08 crc kubenswrapper[4858]: I1122 07:45:08.541869 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:09 crc kubenswrapper[4858]: I1122 07:45:09.325862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerStarted","Data":"43772a2f29cbacab97cc134ea0b6296e6cd7f6107b96be85e14879cfb8682244"} Nov 22 07:45:09 crc kubenswrapper[4858]: I1122 07:45:09.549652 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a492265-db6d-4f46-a344-b4ede2abf5bc" path="/var/lib/kubelet/pods/2a492265-db6d-4f46-a344-b4ede2abf5bc/volumes" Nov 22 07:45:10 crc kubenswrapper[4858]: I1122 07:45:10.095161 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:45:14 crc kubenswrapper[4858]: I1122 07:45:14.385397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerStarted","Data":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} Nov 22 07:45:16 crc kubenswrapper[4858]: I1122 07:45:16.410257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerStarted","Data":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} Nov 22 07:45:20 crc kubenswrapper[4858]: I1122 07:45:20.472545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerStarted","Data":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} Nov 22 07:45:21 crc kubenswrapper[4858]: I1122 07:45:21.488771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerStarted","Data":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} Nov 22 07:45:21 crc kubenswrapper[4858]: I1122 07:45:21.489793 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:45:21 crc kubenswrapper[4858]: I1122 07:45:21.524384 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.234860918 
podStartE2EDuration="14.524359368s" podCreationTimestamp="2025-11-22 07:45:07 +0000 UTC" firstStartedPulling="2025-11-22 07:45:08.546987165 +0000 UTC m=+2070.388410171" lastFinishedPulling="2025-11-22 07:45:20.836485615 +0000 UTC m=+2082.677908621" observedRunningTime="2025-11-22 07:45:21.522772187 +0000 UTC m=+2083.364195183" watchObservedRunningTime="2025-11-22 07:45:21.524359368 +0000 UTC m=+2083.365782374" Nov 22 07:45:21 crc kubenswrapper[4858]: I1122 07:45:21.832603 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:23 crc kubenswrapper[4858]: I1122 07:45:23.512412 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-central-agent" containerID="cri-o://e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" gracePeriod=30 Nov 22 07:45:23 crc kubenswrapper[4858]: I1122 07:45:23.512499 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="sg-core" containerID="cri-o://f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" gracePeriod=30 Nov 22 07:45:23 crc kubenswrapper[4858]: I1122 07:45:23.512516 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-notification-agent" containerID="cri-o://0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" gracePeriod=30 Nov 22 07:45:23 crc kubenswrapper[4858]: I1122 07:45:23.512530 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="proxy-httpd" containerID="cri-o://be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" gracePeriod=30 Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.396477 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4zx\" (UniqueName: \"kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479769 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.479981 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts\") pod \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\" (UID: \"9e44b261-34ba-4afd-b725-4da3d5eadf5f\") " Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.481765 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.483207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.491656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts" (OuterVolumeSpecName: "scripts") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.493738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx" (OuterVolumeSpecName: "kube-api-access-wk4zx") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "kube-api-access-wk4zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530112 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" exitCode=0 Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530151 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" exitCode=2 Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530159 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" exitCode=0 Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530278 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" exitCode=0 Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530258 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerDied","Data":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerDied","Data":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerDied","Data":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530472 4858 scope.go:117] "RemoveContainer" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerDied","Data":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.530685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e44b261-34ba-4afd-b725-4da3d5eadf5f","Type":"ContainerDied","Data":"43772a2f29cbacab97cc134ea0b6296e6cd7f6107b96be85e14879cfb8682244"} Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.537585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.572817 4858 scope.go:117] "RemoveContainer" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.582484 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.582672 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e44b261-34ba-4afd-b725-4da3d5eadf5f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.582741 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.582974 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.583038 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4zx\" (UniqueName: \"kubernetes.io/projected/9e44b261-34ba-4afd-b725-4da3d5eadf5f-kube-api-access-wk4zx\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.596067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.607397 4858 scope.go:117] "RemoveContainer" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.609408 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data" (OuterVolumeSpecName: "config-data") pod "9e44b261-34ba-4afd-b725-4da3d5eadf5f" (UID: "9e44b261-34ba-4afd-b725-4da3d5eadf5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.634203 4858 scope.go:117] "RemoveContainer" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.684794 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.686028 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e44b261-34ba-4afd-b725-4da3d5eadf5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.716798 4858 scope.go:117] "RemoveContainer" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.717739 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": container with ID starting with be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160 not found: ID does not exist" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.717784 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} err="failed to get container status \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": rpc error: code = NotFound desc = could not find container \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": container with ID starting with be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.717823 4858 scope.go:117] "RemoveContainer" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.718226 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": container with ID starting with f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c not found: ID does not exist" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.718259 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} err="failed to get container status \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": rpc error: code = NotFound desc = could not find container \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": container with ID starting with f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.718286 4858 scope.go:117] "RemoveContainer" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.718927 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": container with ID starting with 0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c not found: ID does not exist" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.718959 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} err="failed to get container status \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": rpc error: code = NotFound desc = could not find container \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": container with ID starting with 0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.718973 4858 scope.go:117] "RemoveContainer" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.719254 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": container with ID starting with e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82 not found: ID does not exist" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.719432 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} err="failed to get container status \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": rpc error: code = NotFound desc = could not find container \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": container with ID starting with e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.719453 4858 scope.go:117] "RemoveContainer" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.719959 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} err="failed to get container status \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": rpc error: code = NotFound desc = could not find container \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": container with ID starting with be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.719979 4858 scope.go:117] "RemoveContainer" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.720266 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} err="failed to get container status \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": rpc error: code = NotFound desc = could not find container 
\"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": container with ID starting with f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.720282 4858 scope.go:117] "RemoveContainer" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.720581 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} err="failed to get container status \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": rpc error: code = NotFound desc = could not find container \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": container with ID starting with 0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.720598 4858 scope.go:117] "RemoveContainer" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.721789 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} err="failed to get container status \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": rpc error: code = NotFound desc = could not find container \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": container with ID starting with e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.721814 4858 scope.go:117] "RemoveContainer" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.722146 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} err="failed to get container status \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": rpc error: code = NotFound desc = could not find container \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": container with ID starting with be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.722170 4858 scope.go:117] "RemoveContainer" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.722423 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} err="failed to get container status \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": rpc error: code = NotFound desc = could not find container \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": container with ID starting with f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.722441 4858 scope.go:117] "RemoveContainer" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723077 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} err="failed to get container status \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": rpc error: code = NotFound desc = could not find container \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": container with ID starting with 0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723106 4858 scope.go:117] "RemoveContainer" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723505 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} err="failed to get container status \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": rpc error: code = NotFound desc = could not find container \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": container with ID starting with e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723529 4858 scope.go:117] "RemoveContainer" containerID="be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723823 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160"} err="failed to get container status \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": rpc error: code = NotFound desc = could not find container \"be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160\": container with ID starting with be4104edb625c78b02fd68a1900f9df3bf8108faf200921e38d4a1c6f2422160 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.723853 4858 scope.go:117] "RemoveContainer" containerID="f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.724219 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c"} err="failed to get container status \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": rpc error: code = NotFound desc = could not find container \"f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c\": container with ID starting with f58439671a74aea1453543f4d800efb4cf7dd30825f45a11bb6da21c2bcfd61c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.724439 4858 scope.go:117] "RemoveContainer" containerID="0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.724891 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c"} err="failed to get container status \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": rpc error: code = NotFound desc = could not find container \"0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c\": container with ID starting with 
0c6a048a379a5788e56f0255c442e5ea8725e03be81db4300ddd209f5536bc3c not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.724923 4858 scope.go:117] "RemoveContainer" containerID="e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.725229 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82"} err="failed to get container status \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": rpc error: code = NotFound desc = could not find container \"e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82\": container with ID starting with e4de08952361067720b0e98478b1a07f784df8b2ae5fd067516c901de54a0f82 not found: ID does not exist" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.879437 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.902200 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.919186 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.920059 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-central-agent" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.920200 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-central-agent" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.920285 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="proxy-httpd" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.920384 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="proxy-httpd" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.920468 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-notification-agent" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.920543 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-notification-agent" Nov 22 07:45:24 crc kubenswrapper[4858]: E1122 07:45:24.920648 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="sg-core" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.920730 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="sg-core" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.921099 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="sg-core" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.921211 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-central-agent" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.921309 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="ceilometer-notification-agent" Nov 22 
07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.921416 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" containerName="proxy-httpd" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.923645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.927600 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.928602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:45:24 crc kubenswrapper[4858]: I1122 07:45:24.931642 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.097343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098554 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.098663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s542\" (UniqueName: \"kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.200467 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.201056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.201357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.201619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.201984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.202308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s542\" (UniqueName: \"kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.202701 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.203948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.205469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.224208 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.224725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.224859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.225898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.228817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s542\" (UniqueName: \"kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542\") pod \"ceilometer-0\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.253301 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.477082 4858 scope.go:117] "RemoveContainer" containerID="8414a7ddb1a889f4c7b1768708a340efe0869c69e66f4e76dbef7f463c63e033" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.570542 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e44b261-34ba-4afd-b725-4da3d5eadf5f" path="/var/lib/kubelet/pods/9e44b261-34ba-4afd-b725-4da3d5eadf5f/volumes" Nov 22 07:45:25 crc kubenswrapper[4858]: I1122 07:45:25.608222 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:25 crc kubenswrapper[4858]: W1122 07:45:25.634402 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7921f6c9_3c9a_4ff1_b59d_67f5eceb5b31.slice/crio-6ad30c599cf72f9545cb888f9ea7ecb806248241e809dc4dbc310a912e711dd4 WatchSource:0}: Error finding container 6ad30c599cf72f9545cb888f9ea7ecb806248241e809dc4dbc310a912e711dd4: Status 404 returned error can't find the container with id 6ad30c599cf72f9545cb888f9ea7ecb806248241e809dc4dbc310a912e711dd4 Nov 22 07:45:26 crc kubenswrapper[4858]: I1122 07:45:26.582818 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerStarted","Data":"6ad30c599cf72f9545cb888f9ea7ecb806248241e809dc4dbc310a912e711dd4"} Nov 22 07:45:28 crc kubenswrapper[4858]: I1122 07:45:28.606447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerStarted","Data":"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2"} Nov 22 07:45:30 crc kubenswrapper[4858]: I1122 07:45:30.626913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerStarted","Data":"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a"} Nov 22 07:45:33 crc kubenswrapper[4858]: I1122 07:45:33.659126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerStarted","Data":"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983"} Nov 22 07:45:36 crc kubenswrapper[4858]: I1122 07:45:36.698484 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerStarted","Data":"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5"} Nov 22 07:45:37 crc kubenswrapper[4858]: I1122 07:45:37.707855 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:45:37 crc kubenswrapper[4858]: I1122 07:45:37.748425 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.003152795 podStartE2EDuration="13.748403502s" podCreationTimestamp="2025-11-22 07:45:24 +0000 UTC" firstStartedPulling="2025-11-22 07:45:25.649337732 +0000 UTC m=+2087.490760738" lastFinishedPulling="2025-11-22 07:45:36.394588439 +0000 UTC m=+2098.236011445" observedRunningTime="2025-11-22 07:45:37.730105825 +0000 UTC m=+2099.571528831" watchObservedRunningTime="2025-11-22 07:45:37.748403502 +0000 UTC m=+2099.589826498" Nov 22 07:45:41 crc kubenswrapper[4858]: I1122 07:45:41.752251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-djszx" event={"ID":"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647","Type":"ContainerDied","Data":"aa40f6ffa3b5047db31ed930a0581a3ab393038f8637f6aa84f0906dfaa6ab25"} Nov 22 07:45:41 crc kubenswrapper[4858]: I1122 07:45:41.752592 4858 generic.go:334] "Generic (PLEG): container finished" podID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" containerID="aa40f6ffa3b5047db31ed930a0581a3ab393038f8637f6aa84f0906dfaa6ab25" exitCode=0 Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.223476 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-djszx" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.281952 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data\") pod \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.282008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data\") pod \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.282116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmmk4\" (UniqueName: \"kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4\") pod \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.282173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle\") pod \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\" (UID: \"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647\") " Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.287741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" (UID: "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.290412 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4" (OuterVolumeSpecName: "kube-api-access-kmmk4") pod "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" (UID: "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647"). InnerVolumeSpecName "kube-api-access-kmmk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.326293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" (UID: "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.338442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data" (OuterVolumeSpecName: "config-data") pod "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" (UID: "bb4885ab-de3a-4ccf-bfd4-a702a3b9d647"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.384113 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.384153 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.384164 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.384178 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmmk4\" (UniqueName: \"kubernetes.io/projected/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647-kube-api-access-kmmk4\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.774519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-djszx" event={"ID":"bb4885ab-de3a-4ccf-bfd4-a702a3b9d647","Type":"ContainerDied","Data":"151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5"} Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.774575 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="151d5e5038943d31959d2d9be06e4667f0c29455494beedef03b1b2bd41e70f5" Nov 22 07:45:43 crc kubenswrapper[4858]: I1122 07:45:43.774650 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-djszx" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.416784 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:45:44 crc kubenswrapper[4858]: E1122 07:45:44.421774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" containerName="glance-db-sync" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.421895 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" containerName="glance-db-sync" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.422248 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" containerName="glance-db-sync" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.423730 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.433531 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45558\" (UniqueName: \"kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.504487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606655 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45558\" (UniqueName: \"kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606831 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.606931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.608011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.608035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.608039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.608266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.608308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.639277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45558\" (UniqueName: 
\"kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558\") pod \"dnsmasq-dns-655dc495c7-fxvzj\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:44 crc kubenswrapper[4858]: I1122 07:45:44.752069 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.267809 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.270068 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.276005 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.276427 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.276751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pk5hd" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.293191 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.312255 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.312338 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.326398 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424493 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424553 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxpm\" (UniqueName: \"kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 
07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424778 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.424822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.526642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.526704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.526737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxpm\" (UniqueName: \"kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.526823 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.527365 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.527684 4858 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.527817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.527915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.527984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.528416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.532250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.533224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.534742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.575670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxpm\" (UniqueName: \"kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.588783 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.610733 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.620739 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.626673 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.630083 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.660954 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733069 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733378 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wts78\" (UniqueName: \"kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.733455 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.807939 4858 generic.go:334] "Generic (PLEG): container finished" podID="2fce793a-2443-44d1-92ff-da5191de627e" containerID="21af0fbe578e801e40dfde3b6da14838bf88faca14be620f98094fdce9858804" exitCode=0 Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.808279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" event={"ID":"2fce793a-2443-44d1-92ff-da5191de627e","Type":"ContainerDied","Data":"21af0fbe578e801e40dfde3b6da14838bf88faca14be620f98094fdce9858804"} Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.808402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" event={"ID":"2fce793a-2443-44d1-92ff-da5191de627e","Type":"ContainerStarted","Data":"dc11ec8c0c14a494db8cc8dd1d31f504c3996bd31b1d98b77756be289493f8ca"} Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.834935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.834982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.835012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.835062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.835123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.835149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wts78\" (UniqueName: \"kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: 
I1122 07:45:45.835245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.836026 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.836204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.836954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.842552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.842765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.844356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.861752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wts78\" (UniqueName: \"kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:45 crc kubenswrapper[4858]: I1122 07:45:45.886684 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.066016 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.300523 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.647670 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.699215 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:46 crc kubenswrapper[4858]: W1122 07:45:46.729024 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca95145f_8669_4804_b2fa_e98f8e88a137.slice/crio-fa867162047db6f648177040d24bf73801bd20edcfd21b0bfcf4f1a7204b8dee WatchSource:0}: Error finding container fa867162047db6f648177040d24bf73801bd20edcfd21b0bfcf4f1a7204b8dee: Status 404 returned error can't find the container with id fa867162047db6f648177040d24bf73801bd20edcfd21b0bfcf4f1a7204b8dee Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.827679 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerStarted","Data":"fa867162047db6f648177040d24bf73801bd20edcfd21b0bfcf4f1a7204b8dee"} Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.839533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" event={"ID":"2fce793a-2443-44d1-92ff-da5191de627e","Type":"ContainerStarted","Data":"7b51904963b3fb723d92d0062ca3149a1f3ed4657bb8a888c8f9efadc4d7263a"} Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.839775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.841901 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerStarted","Data":"84a3cf475c0991fa9917bb9c0edc2b61021126e0155187f2e13755977cb21e00"} Nov 22 07:45:46 crc kubenswrapper[4858]: I1122 07:45:46.876529 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" podStartSLOduration=2.876504969 podStartE2EDuration="2.876504969s" podCreationTimestamp="2025-11-22 07:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:46.86278512 +0000 UTC m=+2108.704208126" watchObservedRunningTime="2025-11-22 07:45:46.876504969 +0000 UTC m=+2108.717927975" Nov 22 07:45:47 crc kubenswrapper[4858]: I1122 07:45:47.591281 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:47 crc kubenswrapper[4858]: I1122 07:45:47.876262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerStarted","Data":"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb"} Nov 22 07:45:47 crc kubenswrapper[4858]: I1122 07:45:47.886610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerStarted","Data":"0561791ccc7e8ba0f2c9f062f6448178f46579ad87e91749de625170c7ed88f9"} Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.898910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerStarted","Data":"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25"} Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.899088 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-log" containerID="cri-o://b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" gracePeriod=30 Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.899230 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-httpd" containerID="cri-o://a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" gracePeriod=30 Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.902773 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerStarted","Data":"5815eb18798951b55ab6f0ad727f0912d09f4fd5d5639270135f1cb692de6afe"} Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.902963 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-log" containerID="cri-o://0561791ccc7e8ba0f2c9f062f6448178f46579ad87e91749de625170c7ed88f9" gracePeriod=30 Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.902992 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-httpd" containerID="cri-o://5815eb18798951b55ab6f0ad727f0912d09f4fd5d5639270135f1cb692de6afe" gracePeriod=30 Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.923800 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.923758794 podStartE2EDuration="4.923758794s" podCreationTimestamp="2025-11-22 07:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:48.921339557 +0000 UTC m=+2110.762762583" watchObservedRunningTime="2025-11-22 07:45:48.923758794 +0000 UTC m=+2110.765181800" Nov 22 07:45:48 crc kubenswrapper[4858]: I1122 07:45:48.952010 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.951989209 podStartE2EDuration="4.951989209s" podCreationTimestamp="2025-11-22 07:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:48.946434001 +0000 UTC m=+2110.787857017" watchObservedRunningTime="2025-11-22 07:45:48.951989209 +0000 UTC m=+2110.793412215" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.738657 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.846959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.847283 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxpm\" (UniqueName: \"kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.847442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.847594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.847749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.847915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.848028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\" (UID: \"76e9ca71-069c-40e8-90a8-8e4354c2cad0\") " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.857853 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.871676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs" (OuterVolumeSpecName: "logs") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.877507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.884526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts" (OuterVolumeSpecName: "scripts") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.895567 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm" (OuterVolumeSpecName: "kube-api-access-vrxpm") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "kube-api-access-vrxpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.943528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953628 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953673 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953684 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxpm\" (UniqueName: \"kubernetes.io/projected/76e9ca71-069c-40e8-90a8-8e4354c2cad0-kube-api-access-vrxpm\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953694 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953703 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.953712 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76e9ca71-069c-40e8-90a8-8e4354c2cad0-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.963358 4858 generic.go:334] "Generic (PLEG): container finished" podID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" 
containerID="a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" exitCode=0 Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.963859 4858 generic.go:334] "Generic (PLEG): container finished" podID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerID="b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" exitCode=143 Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.963983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerDied","Data":"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25"} Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.964080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerDied","Data":"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb"} Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.964146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76e9ca71-069c-40e8-90a8-8e4354c2cad0","Type":"ContainerDied","Data":"84a3cf475c0991fa9917bb9c0edc2b61021126e0155187f2e13755977cb21e00"} Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.964225 4858 scope.go:117] "RemoveContainer" containerID="a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.964454 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.969542 4858 generic.go:334] "Generic (PLEG): container finished" podID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerID="5815eb18798951b55ab6f0ad727f0912d09f4fd5d5639270135f1cb692de6afe" exitCode=0 Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.969578 4858 generic.go:334] "Generic (PLEG): container finished" podID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerID="0561791ccc7e8ba0f2c9f062f6448178f46579ad87e91749de625170c7ed88f9" exitCode=143 Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.969606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerDied","Data":"5815eb18798951b55ab6f0ad727f0912d09f4fd5d5639270135f1cb692de6afe"} Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.969641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerDied","Data":"0561791ccc7e8ba0f2c9f062f6448178f46579ad87e91749de625170c7ed88f9"} Nov 22 07:45:49 crc kubenswrapper[4858]: I1122 07:45:49.995684 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.004188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data" (OuterVolumeSpecName: "config-data") pod "76e9ca71-069c-40e8-90a8-8e4354c2cad0" (UID: "76e9ca71-069c-40e8-90a8-8e4354c2cad0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.005761 4858 scope.go:117] "RemoveContainer" containerID="b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.061705 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e9ca71-069c-40e8-90a8-8e4354c2cad0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.061734 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.099656 4858 scope.go:117] "RemoveContainer" containerID="a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" Nov 22 07:45:50 crc kubenswrapper[4858]: E1122 07:45:50.100299 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25\": container with ID starting with a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25 not found: ID does not exist" containerID="a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.100368 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25"} err="failed to get container status \"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25\": rpc error: code = NotFound desc = could not find container \"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25\": container with ID starting with a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25 not found: ID does not exist" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.100411 4858 scope.go:117] "RemoveContainer" containerID="b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" Nov 22 07:45:50 crc kubenswrapper[4858]: E1122 07:45:50.100895 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb\": container with ID starting with b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb not found: ID does not exist" containerID="b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.100953 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb"} err="failed to get container status \"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb\": rpc error: code = NotFound desc = could not find container \"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb\": container with ID starting with b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb not found: ID does not exist" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.100991 4858 scope.go:117] "RemoveContainer" containerID="a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.101548 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25"} err="failed to get container status \"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25\": rpc error: code = NotFound desc = could not find container \"a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25\": container with ID starting with a48126d5ba209ed102948a8a482f6d646f7d08b01eac0e26c1339c81b28f5c25 not found: ID does not exist" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.101576 4858 scope.go:117] "RemoveContainer" containerID="b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.101903 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb"} err="failed to get container status \"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb\": rpc error: code = NotFound desc = could not find container \"b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb\": container with ID starting with b31f00e04b018b8ad311a6553930dcf9e13ecf6bcb1997d673e04f63b2e7a1eb not found: ID does not exist" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.322598 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.330738 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.378401 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:50 crc kubenswrapper[4858]: E1122 07:45:50.378957 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-log" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.378981 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-log" Nov 22 07:45:50 crc kubenswrapper[4858]: E1122 07:45:50.379010 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-httpd" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.379019 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-httpd" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.379295 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-log" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.379341 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" containerName="glance-httpd" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.380708 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.388953 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.389246 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.395046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.474669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6m8n\" (UniqueName: \"kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475166 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475199 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.475274 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.511408 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.576016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.576455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.576643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.576799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.576872 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.577023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wts78\" (UniqueName: \"kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.577190 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.577353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs\") pod \"ca95145f-8669-4804-b2fa-e98f8e88a137\" (UID: \"ca95145f-8669-4804-b2fa-e98f8e88a137\") " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.577922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6m8n\" (UniqueName: \"kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.578913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.579067 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.580162 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.582122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.582809 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs" (OuterVolumeSpecName: "logs") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.583159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.584614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78" (OuterVolumeSpecName: "kube-api-access-wts78") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "kube-api-access-wts78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.585636 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts" (OuterVolumeSpecName: "scripts") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.585886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.588401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.591178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.592069 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.597030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.601419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6m8n\" (UniqueName: \"kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.628985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.642911 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.674241 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data" (OuterVolumeSpecName: "config-data") pod "ca95145f-8669-4804-b2fa-e98f8e88a137" (UID: "ca95145f-8669-4804-b2fa-e98f8e88a137"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681010 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681051 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca95145f-8669-4804-b2fa-e98f8e88a137-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681060 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681069 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681079 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca95145f-8669-4804-b2fa-e98f8e88a137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.681092 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wts78\" (UniqueName: \"kubernetes.io/projected/ca95145f-8669-4804-b2fa-e98f8e88a137-kube-api-access-wts78\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.703499 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.782700 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.807560 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.995978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ca95145f-8669-4804-b2fa-e98f8e88a137","Type":"ContainerDied","Data":"fa867162047db6f648177040d24bf73801bd20edcfd21b0bfcf4f1a7204b8dee"} Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.996078 4858 scope.go:117] "RemoveContainer" containerID="5815eb18798951b55ab6f0ad727f0912d09f4fd5d5639270135f1cb692de6afe" Nov 22 07:45:50 crc kubenswrapper[4858]: I1122 07:45:50.996646 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.037395 4858 scope.go:117] "RemoveContainer" containerID="0561791ccc7e8ba0f2c9f062f6448178f46579ad87e91749de625170c7ed88f9" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.122841 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.140938 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.158781 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:51 crc kubenswrapper[4858]: E1122 07:45:51.159395 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-log" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.159422 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-log" Nov 22 07:45:51 crc kubenswrapper[4858]: E1122 07:45:51.159509 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-httpd" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.159521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-httpd" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.159782 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-log" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.159803 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" containerName="glance-httpd" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.161423 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.167978 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.168455 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.170932 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.200763 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr96z\" (UniqueName: \"kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.200978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.201598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr96z\" (UniqueName: \"kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.304565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.305277 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.305669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.305975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.311761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.314908 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.315252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.323238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.343906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr96z\" (UniqueName: \"kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.399635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.465023 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.482080 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.560185 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e9ca71-069c-40e8-90a8-8e4354c2cad0" path="/var/lib/kubelet/pods/76e9ca71-069c-40e8-90a8-8e4354c2cad0/volumes" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.561795 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca95145f-8669-4804-b2fa-e98f8e88a137" path="/var/lib/kubelet/pods/ca95145f-8669-4804-b2fa-e98f8e88a137/volumes" Nov 22 07:45:51 crc kubenswrapper[4858]: I1122 07:45:51.810467 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:52 crc kubenswrapper[4858]: I1122 07:45:52.018190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerStarted","Data":"daf9ab09ec867c074fe15fb94f7cd35fdee142beb125392596800f0345ce4901"} Nov 22 07:45:52 crc kubenswrapper[4858]: I1122 07:45:52.159645 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.032802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerStarted","Data":"51604b1dd7eceb22876c5f2824f93728dd6ccb3368e18bfb5bdbfd78f9ae8589"} Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.033427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerStarted","Data":"a0adda39f79e6c29822139189a3c320fe6ee86b411f22c91f3e5eaceb048c381"} Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.039279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerStarted","Data":"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294"} Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.039361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerStarted","Data":"2fd230794ba6cf0990cbe71ce406f485bffa4c76da3ed4308ebe6d00e14556a1"} Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.063254 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.063229924 podStartE2EDuration="3.063229924s" podCreationTimestamp="2025-11-22 07:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:53.062562093 +0000 UTC m=+2114.903985109" watchObservedRunningTime="2025-11-22 07:45:53.063229924 +0000 UTC m=+2114.904652930" Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.934376 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.935002 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-central-agent" containerID="cri-o://deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2" 
gracePeriod=30 Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.935112 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="sg-core" containerID="cri-o://b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983" gracePeriod=30 Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.935141 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-notification-agent" containerID="cri-o://3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a" gracePeriod=30 Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.935384 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" containerID="cri-o://3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5" gracePeriod=30 Nov 22 07:45:53 crc kubenswrapper[4858]: I1122 07:45:53.945218 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.171:3000/\": EOF" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.052237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerStarted","Data":"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318"} Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.052591 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-log" containerID="cri-o://b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" gracePeriod=30 Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.053046 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-httpd" containerID="cri-o://8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" gracePeriod=30 Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.082019 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.081965635 podStartE2EDuration="3.081965635s" podCreationTimestamp="2025-11-22 07:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:54.07306784 +0000 UTC m=+2115.914490856" watchObservedRunningTime="2025-11-22 07:45:54.081965635 +0000 UTC m=+2115.923388651" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.684070 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.754477 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791554 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr96z\" (UniqueName: \"kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791787 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.791862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs\") pod \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\" (UID: \"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91\") " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.793129 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs" (OuterVolumeSpecName: "logs") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.794347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.807737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z" (OuterVolumeSpecName: "kube-api-access-kr96z") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "kube-api-access-kr96z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.807950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.810748 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts" (OuterVolumeSpecName: "scripts") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.832243 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.832536 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="dnsmasq-dns" containerID="cri-o://29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44" gracePeriod=10 Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.864166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.876282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895187 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895231 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895245 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895260 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895275 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895306 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.895339 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr96z\" (UniqueName: \"kubernetes.io/projected/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-kube-api-access-kr96z\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.915500 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data" (OuterVolumeSpecName: "config-data") pod "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" (UID: "b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.922830 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.997215 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:54 crc kubenswrapper[4858]: I1122 07:45:54.997579 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.085949 4858 generic.go:334] "Generic (PLEG): container finished" podID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerID="3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5" exitCode=0 Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.086002 4858 generic.go:334] "Generic (PLEG): container finished" podID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerID="b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983" exitCode=2 Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.086012 4858 generic.go:334] "Generic (PLEG): container finished" podID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerID="deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2" exitCode=0 Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.086128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerDied","Data":"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.086167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerDied","Data":"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.086195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerDied","Data":"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092137 4858 generic.go:334] "Generic (PLEG): container finished" podID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerID="8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" exitCode=0 Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092186 4858 generic.go:334] "Generic (PLEG): container finished" podID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerID="b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" exitCode=143 Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerDied","Data":"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerDied","Data":"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91","Type":"ContainerDied","Data":"2fd230794ba6cf0990cbe71ce406f485bffa4c76da3ed4308ebe6d00e14556a1"} Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092307 4858 scope.go:117] "RemoveContainer" containerID="8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.092987 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.227117 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.232461 4858 scope.go:117] "RemoveContainer" containerID="b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.237333 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.258192 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.171:3000/\": dial tcp 10.217.0.171:3000: connect: connection refused" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.268461 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:55 crc kubenswrapper[4858]: E1122 07:45:55.268993 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-httpd" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.269018 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-httpd" Nov 22 07:45:55 crc kubenswrapper[4858]: E1122 07:45:55.269035 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-log" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.269043 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-log" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.269223 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-log" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.269241 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" containerName="glance-httpd" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.270493 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.273778 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.274649 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.278639 4858 scope.go:117] "RemoveContainer" containerID="8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" Nov 22 07:45:55 crc kubenswrapper[4858]: E1122 07:45:55.279767 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318\": container with ID starting with 8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318 not found: ID does not exist" containerID="8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.279817 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318"} err="failed to get container status \"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318\": rpc error: code = NotFound desc = could not find container \"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318\": container with ID starting with 8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318 not found: ID does not exist" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.279854 4858 scope.go:117] "RemoveContainer" containerID="b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" Nov 22 07:45:55 crc kubenswrapper[4858]: E1122 07:45:55.280549 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294\": container with ID starting with b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294 not found: ID does not exist" containerID="b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.280694 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294"} err="failed to get container status \"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294\": rpc error: code = NotFound desc = could not find container \"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294\": container with ID starting with b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294 not found: ID does not exist" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.280815 4858 scope.go:117] "RemoveContainer" containerID="8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.281366 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318"} err="failed to get container status \"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318\": rpc error: code = NotFound desc = could not find container \"8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318\": 
container with ID starting with 8318c8af2efe57331f3386fb471ffde285633cb064829c7481a20b0134f74318 not found: ID does not exist" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.281480 4858 scope.go:117] "RemoveContainer" containerID="b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.281762 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294"} err="failed to get container status \"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294\": rpc error: code = NotFound desc = could not find container \"b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294\": container with ID starting with b0b47b447844f46a279ee034eeaf7d5e47daf9ee7f21a44baa29387c66428294 not found: ID does not exist" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.285409 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.409680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.409750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.409790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.409843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.409892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.410111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.410173 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.410227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.511947 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.512750 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.513144 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.514851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.515122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.521237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.521930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.523705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.524128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.534162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.551092 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91" 
path="/var/lib/kubelet/pods/b9118bd1-9e5c-4c8f-91b5-b54cfafd2b91/volumes" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.575185 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.613259 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.856652 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.927227 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.927844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.927880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh552\" (UniqueName: \"kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.927934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.928031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.928102 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb\") pod \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\" (UID: \"2f3ebd41-7392-415b-8e54-f56644e0f6e3\") " Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.934840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552" (OuterVolumeSpecName: "kube-api-access-dh552") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "kube-api-access-dh552". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.989752 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.992853 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:55 crc kubenswrapper[4858]: I1122 07:45:55.996813 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.005910 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config" (OuterVolumeSpecName: "config") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.025376 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2f3ebd41-7392-415b-8e54-f56644e0f6e3" (UID: "2f3ebd41-7392-415b-8e54-f56644e0f6e3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034730 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034802 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh552\" (UniqueName: \"kubernetes.io/projected/2f3ebd41-7392-415b-8e54-f56644e0f6e3-kube-api-access-dh552\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034814 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034825 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034833 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.034842 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f3ebd41-7392-415b-8e54-f56644e0f6e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.105807 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerID="29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44" exitCode=0 Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.105865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" event={"ID":"2f3ebd41-7392-415b-8e54-f56644e0f6e3","Type":"ContainerDied","Data":"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44"} Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.105923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" event={"ID":"2f3ebd41-7392-415b-8e54-f56644e0f6e3","Type":"ContainerDied","Data":"7bd5d7a3bfba924c4dc92e820a8ada9122469e1d9df0de5dd765a9b447b53047"} Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.105924 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f45f7cc5-gj9xc" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.106031 4858 scope.go:117] "RemoveContainer" containerID="29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.137390 4858 scope.go:117] "RemoveContainer" containerID="7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.195188 4858 scope.go:117] "RemoveContainer" containerID="29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44" Nov 22 07:45:56 crc kubenswrapper[4858]: E1122 07:45:56.195935 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44\": container with ID starting with 29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44 not found: ID does not exist" containerID="29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.196005 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44"} err="failed to get container status \"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44\": rpc error: code = NotFound desc = could not find container \"29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44\": container with ID starting with 29fb02e4dbcb72d38583e1f9b5bd2a9a70c78c13ed354e3c4fe4f57877754e44 not found: ID does not exist" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.196050 4858 scope.go:117] "RemoveContainer" containerID="7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0" Nov 22 07:45:56 crc kubenswrapper[4858]: E1122 07:45:56.197609 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0\": container with ID starting with 7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0 not found: ID does not exist" containerID="7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.197658 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0"} err="failed to get container status \"7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0\": rpc error: code = NotFound desc = could not find container \"7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0\": container with ID starting with 7952c56f118213385a5e3797fc3da977b67a007d826f7a521fdbc7c7d3cd8cd0 not found: ID does not exist" Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.200132 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.212816 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f45f7cc5-gj9xc"] Nov 22 07:45:56 crc kubenswrapper[4858]: I1122 07:45:56.324122 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:56 crc kubenswrapper[4858]: W1122 07:45:56.337794 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf987998_e4fb_4798_aaf5_6cb5f6a4670e.slice/crio-92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863 WatchSource:0}: Error finding container 92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863: Status 404 returned error can't find the container with id 92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863 Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.134223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerStarted","Data":"df6351eb07779190404e7510c779e428984d0ff82f2f65b8c045ff400d0f540b"} Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.134766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerStarted","Data":"92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863"} Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.574879 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" path="/var/lib/kubelet/pods/2f3ebd41-7392-415b-8e54-f56644e0f6e3/volumes" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.703517 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.781044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.781192 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.781218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.782166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.781308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.782248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.782299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.782378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s542\" (UniqueName: \"kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542\") pod \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\" (UID: \"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31\") " Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.783419 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.785934 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.786193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts" (OuterVolumeSpecName: "scripts") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.790623 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542" (OuterVolumeSpecName: "kube-api-access-8s542") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "kube-api-access-8s542". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.823994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.878543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.886056 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.886097 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.886111 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.886119 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s542\" (UniqueName: \"kubernetes.io/projected/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-kube-api-access-8s542\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.886129 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.910270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data" (OuterVolumeSpecName: "config-data") pod "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" (UID: "7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:57 crc kubenswrapper[4858]: I1122 07:45:57.988496 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.151559 4858 generic.go:334] "Generic (PLEG): container finished" podID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerID="3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a" exitCode=0 Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.151628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerDied","Data":"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a"} Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.151650 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.152819 4858 scope.go:117] "RemoveContainer" containerID="3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.152699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31","Type":"ContainerDied","Data":"6ad30c599cf72f9545cb888f9ea7ecb806248241e809dc4dbc310a912e711dd4"} Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.189580 4858 scope.go:117] "RemoveContainer" containerID="b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.223676 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.231679 4858 scope.go:117] "RemoveContainer" containerID="3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.242943 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.283581 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284140 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-central-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284166 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-central-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284199 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="init" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284214 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="init" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284225 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="dnsmasq-dns" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284234 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="dnsmasq-dns" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284250 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-notification-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284261 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-notification-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284279 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="sg-core" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284288 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="sg-core" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.284303 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" Nov 22 07:45:58 crc kubenswrapper[4858]: 
I1122 07:45:58.284312 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284580 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="proxy-httpd" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284606 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="sg-core" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284629 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-notification-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284645 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3ebd41-7392-415b-8e54-f56644e0f6e3" containerName="dnsmasq-dns" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.284662 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" containerName="ceilometer-central-agent" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.287125 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.287530 4858 scope.go:117] "RemoveContainer" containerID="deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.303843 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.304731 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.318110 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.362877 4858 scope.go:117] "RemoveContainer" containerID="3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.367180 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5\": container with ID starting with 3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5 not found: ID does not exist" containerID="3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.367242 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5"} err="failed to get container status \"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5\": rpc error: code = NotFound desc = could not find container \"3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5\": container with ID starting with 3313cfbbf16c23332cc2ac73f3cb2613bf56ddd8b5795d916f9dac7f381e84f5 not found: ID does not exist" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.367277 4858 scope.go:117] "RemoveContainer" containerID="b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.367682 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983\": container with ID starting with b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983 not found: ID does not exist" containerID="b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.367705 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983"} err="failed to get container status \"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983\": rpc error: code = NotFound desc = could not find container \"b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983\": container with ID starting with b5fe379206d983b26a2cc9cb7b1ed05c8a508d18e08bba8d2953cc7b8e670983 not found: ID does not exist" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.367724 4858 scope.go:117] "RemoveContainer" containerID="3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.368119 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a\": container with ID starting with 3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a not found: ID does not exist" containerID="3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.368143 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a"} err="failed to get container status \"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a\": rpc error: code = NotFound desc = could not find container \"3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a\": container with ID starting with 3b972c92c8e6cfd0d2f5b0276c380d1ddb49c7f7dd471affee616996eac97b8a not found: ID does not exist" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.368161 4858 scope.go:117] "RemoveContainer" containerID="deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2" Nov 22 07:45:58 crc kubenswrapper[4858]: E1122 07:45:58.368490 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2\": container with ID starting with deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2 not found: ID does not exist" containerID="deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.368518 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2"} err="failed to get container status \"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2\": rpc error: code = NotFound desc = could not find container \"deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2\": container with ID starting with deef01b3b233066784a6b3a0b4b2c05063735f8944a999d73aa2a0cd5d0fc4c2 not found: ID does not exist" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7xm\" (UniqueName: \"kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.397647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499722 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7xm\" (UniqueName: \"kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc 
kubenswrapper[4858]: I1122 07:45:58.499861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.499966 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.501066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.502431 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.508641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.509441 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.509759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.515981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.525064 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gb7xm\" (UniqueName: \"kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm\") pod \"ceilometer-0\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " pod="openstack/ceilometer-0" Nov 22 07:45:58 crc kubenswrapper[4858]: I1122 07:45:58.640028 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:59 crc kubenswrapper[4858]: I1122 07:45:59.156157 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:59 crc kubenswrapper[4858]: I1122 07:45:59.170598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerStarted","Data":"358f5eea1c33599a6ff9d0f49219f36c9849f142f1d83d32c74db35d272f5419"} Nov 22 07:45:59 crc kubenswrapper[4858]: I1122 07:45:59.204055 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.204022407 podStartE2EDuration="4.204022407s" podCreationTimestamp="2025-11-22 07:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:59.198055856 +0000 UTC m=+2121.039478862" watchObservedRunningTime="2025-11-22 07:45:59.204022407 +0000 UTC m=+2121.045445413" Nov 22 07:45:59 crc kubenswrapper[4858]: I1122 07:45:59.548619 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31" path="/var/lib/kubelet/pods/7921f6c9-3c9a-4ff1-b59d-67f5eceb5b31/volumes" Nov 22 07:46:00 crc kubenswrapper[4858]: I1122 07:46:00.185718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerStarted","Data":"851228033aaa69018ce7ea85beea059b381fa9cdf407d065dab12874a19c3dcf"} Nov 22 07:46:00 crc kubenswrapper[4858]: I1122 07:46:00.808858 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:46:00 crc kubenswrapper[4858]: I1122 07:46:00.808948 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:46:00 crc kubenswrapper[4858]: I1122 07:46:00.843430 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:46:00 crc kubenswrapper[4858]: I1122 07:46:00.854016 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:46:01 crc kubenswrapper[4858]: I1122 07:46:01.194947 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:46:01 crc kubenswrapper[4858]: I1122 07:46:01.195037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:46:03 crc kubenswrapper[4858]: I1122 07:46:03.214946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerStarted","Data":"3cf2b02dd0a09d919e87c1565ad35b41d76bb4f1622b857afd01a53cc7768d36"} Nov 22 07:46:04 crc kubenswrapper[4858]: I1122 07:46:04.227349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerStarted","Data":"8fcda58f1e219b32437e7936af20605e3d6e3ab22eb3a47d3cfbb68a45a7bd17"} Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.243610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerStarted","Data":"8e7de24208d36ac990f6e8c573b1df188181e2722380cec00c91fe1c1a00979c"} Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.614768 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.614827 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.645163 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.662176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.663044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.663150 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:46:05 crc kubenswrapper[4858]: I1122 07:46:05.683511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:46:06 crc kubenswrapper[4858]: I1122 07:46:06.257999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerStarted","Data":"0022c440e22d65d553be7fd6837a2d1ad7fd90b26d3e2bd98d1dbed844c63633"} Nov 22 07:46:06 crc kubenswrapper[4858]: I1122 07:46:06.259504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:06 crc kubenswrapper[4858]: I1122 07:46:06.259526 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:06 crc kubenswrapper[4858]: I1122 07:46:06.298389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.622683668 podStartE2EDuration="8.298363053s" podCreationTimestamp="2025-11-22 07:45:58 +0000 UTC" firstStartedPulling="2025-11-22 07:45:59.163579792 +0000 UTC m=+2121.005002798" lastFinishedPulling="2025-11-22 07:46:05.839259177 +0000 UTC m=+2127.680682183" observedRunningTime="2025-11-22 07:46:06.28827873 +0000 UTC m=+2128.129701746" watchObservedRunningTime="2025-11-22 07:46:06.298363053 +0000 UTC m=+2128.139786059" Nov 22 07:46:07 crc kubenswrapper[4858]: I1122 07:46:07.267272 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:46:08 crc kubenswrapper[4858]: I1122 07:46:08.278962 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" containerID="9fa0d715445d9cabd5993deddac4cf06600dfcb8a11d1fc5d81fa7dadce6684f" exitCode=0 Nov 22 07:46:08 crc kubenswrapper[4858]: I1122 07:46:08.279348 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:46:08 crc 
kubenswrapper[4858]: I1122 07:46:08.279358 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:46:08 crc kubenswrapper[4858]: I1122 07:46:08.280485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hrts7" event={"ID":"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08","Type":"ContainerDied","Data":"9fa0d715445d9cabd5993deddac4cf06600dfcb8a11d1fc5d81fa7dadce6684f"} Nov 22 07:46:08 crc kubenswrapper[4858]: I1122 07:46:08.484035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:08 crc kubenswrapper[4858]: I1122 07:46:08.554287 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.698418 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.849259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data\") pod \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.849385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-658vw\" (UniqueName: \"kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw\") pod \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.849560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts\") pod \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.849675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle\") pod \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\" (UID: \"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08\") " Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.861755 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts" (OuterVolumeSpecName: "scripts") pod "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" (UID: "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.861787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw" (OuterVolumeSpecName: "kube-api-access-658vw") pod "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" (UID: "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08"). InnerVolumeSpecName "kube-api-access-658vw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.883942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data" (OuterVolumeSpecName: "config-data") pod "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" (UID: "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.887248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" (UID: "f5712f6e-4ef2-4de1-9093-5fa00d6a1d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.951649 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-658vw\" (UniqueName: \"kubernetes.io/projected/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-kube-api-access-658vw\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.951691 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.951704 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:09 crc kubenswrapper[4858]: I1122 07:46:09.951713 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.302598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hrts7" event={"ID":"f5712f6e-4ef2-4de1-9093-5fa00d6a1d08","Type":"ContainerDied","Data":"0e6166f99e07936e5c0ed0616cf026cf3c8f5c380756d5a7c93a4cc0744170d3"} Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.302690 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6166f99e07936e5c0ed0616cf026cf3c8f5c380756d5a7c93a4cc0744170d3" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.302642 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hrts7" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.427021 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:10 crc kubenswrapper[4858]: E1122 07:46:10.427523 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" containerName="nova-cell0-conductor-db-sync" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.427541 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" containerName="nova-cell0-conductor-db-sync" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.427757 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" containerName="nova-cell0-conductor-db-sync" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.428624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.437224 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.468986 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fdmdn" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.473796 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.570938 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.571296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2n2b\" (UniqueName: \"kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.571363 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.673470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.673538 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2n2b\" (UniqueName: \"kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc 
kubenswrapper[4858]: I1122 07:46:10.673608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.681670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.682716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.694780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2n2b\" (UniqueName: \"kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b\") pod \"nova-cell0-conductor-0\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:10 crc kubenswrapper[4858]: I1122 07:46:10.794088 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:11 crc kubenswrapper[4858]: I1122 07:46:11.262481 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:11 crc kubenswrapper[4858]: I1122 07:46:11.316860 4858 generic.go:334] "Generic (PLEG): container finished" podID="12958341-df4b-4746-9621-04a44a4dafea" containerID="7424937b63e055893b5aae4bd3bd82c0b7a1388a0f97c8f17d97e275fc381ff3" exitCode=0 Nov 22 07:46:11 crc kubenswrapper[4858]: I1122 07:46:11.316943 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4c8pg" event={"ID":"12958341-df4b-4746-9621-04a44a4dafea","Type":"ContainerDied","Data":"7424937b63e055893b5aae4bd3bd82c0b7a1388a0f97c8f17d97e275fc381ff3"} Nov 22 07:46:11 crc kubenswrapper[4858]: I1122 07:46:11.319075 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29fdc78b-c84c-47ff-b0b4-a854e74f23d5","Type":"ContainerStarted","Data":"e3f2a2bd047ba3f96d8929103914b4130c0aa33eed9b157acdf361a92ebcf755"} Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.330855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29fdc78b-c84c-47ff-b0b4-a854e74f23d5","Type":"ContainerStarted","Data":"76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e"} Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.332663 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.355876 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.355848827 podStartE2EDuration="2.355848827s" podCreationTimestamp="2025-11-22 07:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 07:46:12.346177678 +0000 UTC m=+2134.187600694" watchObservedRunningTime="2025-11-22 07:46:12.355848827 +0000 UTC m=+2134.197271833" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.729511 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.818018 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle\") pod \"12958341-df4b-4746-9621-04a44a4dafea\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.818974 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config\") pod \"12958341-df4b-4746-9621-04a44a4dafea\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.819165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9x9v\" (UniqueName: \"kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v\") pod \"12958341-df4b-4746-9621-04a44a4dafea\" (UID: \"12958341-df4b-4746-9621-04a44a4dafea\") " Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.824703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v" (OuterVolumeSpecName: "kube-api-access-x9x9v") pod "12958341-df4b-4746-9621-04a44a4dafea" (UID: "12958341-df4b-4746-9621-04a44a4dafea"). InnerVolumeSpecName "kube-api-access-x9x9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.849563 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config" (OuterVolumeSpecName: "config") pod "12958341-df4b-4746-9621-04a44a4dafea" (UID: "12958341-df4b-4746-9621-04a44a4dafea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.857278 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12958341-df4b-4746-9621-04a44a4dafea" (UID: "12958341-df4b-4746-9621-04a44a4dafea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.922555 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9x9v\" (UniqueName: \"kubernetes.io/projected/12958341-df4b-4746-9621-04a44a4dafea-kube-api-access-x9x9v\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.922763 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:12 crc kubenswrapper[4858]: I1122 07:46:12.922872 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/12958341-df4b-4746-9621-04a44a4dafea-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.339299 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4c8pg" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.339283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4c8pg" event={"ID":"12958341-df4b-4746-9621-04a44a4dafea","Type":"ContainerDied","Data":"3fd346777c9f9fbeccf5f5ac7165fc3c1d38c06dc4e6623ea5d675635711af7d"} Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.339590 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd346777c9f9fbeccf5f5ac7165fc3c1d38c06dc4e6623ea5d675635711af7d" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.515134 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:13 crc kubenswrapper[4858]: E1122 07:46:13.516018 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12958341-df4b-4746-9621-04a44a4dafea" containerName="neutron-db-sync" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.516046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="12958341-df4b-4746-9621-04a44a4dafea" containerName="neutron-db-sync" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.516310 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="12958341-df4b-4746-9621-04a44a4dafea" containerName="neutron-db-sync" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.517654 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.560538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.613764 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.616995 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.620730 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.620962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d4vvf" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.621121 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.625222 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.641522 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.641673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.641763 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.641810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.641903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9zf\" (UniqueName: \"kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.642036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.653004 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqfw\" (UniqueName: 
\"kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743690 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743731 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743892 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9zf\" (UniqueName: \"kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.743978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.744013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.744923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.745102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.745102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.745300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.745469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.764163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9zf\" (UniqueName: \"kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf\") pod \"dnsmasq-dns-797bbc649-j82sw\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.844726 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.845569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.845637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.845677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.845709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.845737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqfw\" (UniqueName: \"kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.851357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.852201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.853729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.864314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.871725 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vfqfw\" (UniqueName: \"kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw\") pod \"neutron-5cbb9bb55b-9l7r4\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:13 crc kubenswrapper[4858]: I1122 07:46:13.970886 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:14 crc kubenswrapper[4858]: I1122 07:46:14.355248 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:14 crc kubenswrapper[4858]: W1122 07:46:14.359939 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dd2e516_84ae_41bd_9fdb_16aa2040356b.slice/crio-92976e7f09f5cd61ed4baba5ad7bdc35364991c9141cef6c9572c96e234da774 WatchSource:0}: Error finding container 92976e7f09f5cd61ed4baba5ad7bdc35364991c9141cef6c9572c96e234da774: Status 404 returned error can't find the container with id 92976e7f09f5cd61ed4baba5ad7bdc35364991c9141cef6c9572c96e234da774 Nov 22 07:46:14 crc kubenswrapper[4858]: I1122 07:46:14.660028 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.312105 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.312798 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.366549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerStarted","Data":"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb"} Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.366640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerStarted","Data":"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600"} Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.366659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerStarted","Data":"e6c9257233dd531f85cf86c9cd05ffcf681d9dfc9deaf1d1516d07f0fc1e1cd7"} Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.366745 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.369577 4858 generic.go:334] "Generic (PLEG): container finished" podID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerID="b8d598333ebe9b8f17d3277c98df424f1b884480ba90b3b71531396135fece8f" exitCode=0 Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.369639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-797bbc649-j82sw" event={"ID":"4dd2e516-84ae-41bd-9fdb-16aa2040356b","Type":"ContainerDied","Data":"b8d598333ebe9b8f17d3277c98df424f1b884480ba90b3b71531396135fece8f"} Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.369686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-j82sw" event={"ID":"4dd2e516-84ae-41bd-9fdb-16aa2040356b","Type":"ContainerStarted","Data":"92976e7f09f5cd61ed4baba5ad7bdc35364991c9141cef6c9572c96e234da774"} Nov 22 07:46:15 crc kubenswrapper[4858]: I1122 07:46:15.443518 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cbb9bb55b-9l7r4" podStartSLOduration=2.443494949 podStartE2EDuration="2.443494949s" podCreationTimestamp="2025-11-22 07:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:15.399204105 +0000 UTC m=+2137.240627111" watchObservedRunningTime="2025-11-22 07:46:15.443494949 +0000 UTC m=+2137.284917955" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.381373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-j82sw" event={"ID":"4dd2e516-84ae-41bd-9fdb-16aa2040356b","Type":"ContainerStarted","Data":"1d42cfb1f43238fcf3076ebf8b499aa6588df737fc4c961847d4fa0c324ab0d3"} Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.406643 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-797bbc649-j82sw" podStartSLOduration=3.406618219 podStartE2EDuration="3.406618219s" podCreationTimestamp="2025-11-22 07:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:16.402796976 +0000 UTC m=+2138.244219992" watchObservedRunningTime="2025-11-22 07:46:16.406618219 +0000 UTC m=+2138.248041225" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.523426 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.525498 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.529901 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.529901 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.538841 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.610877 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8544d\" (UniqueName: \"kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.611945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8544d\" (UniqueName: \"kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714310 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.714528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.739448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.741834 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.742077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: 
\"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.742530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.742842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.743162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8544d\" (UniqueName: \"kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.743259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config\") pod \"neutron-56cfd7c4f7-gvswl\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:16 crc kubenswrapper[4858]: I1122 07:46:16.849430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:17 crc kubenswrapper[4858]: I1122 07:46:17.395021 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:17 crc kubenswrapper[4858]: I1122 07:46:17.523294 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:46:17 crc kubenswrapper[4858]: W1122 07:46:17.525570 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555cf9f2_a18e_4b84_b360_d03c7e0d0821.slice/crio-3cf09d0702349f2485c683260afed37dc42a8928e4e5cd19678ca8afa92abd57 WatchSource:0}: Error finding container 3cf09d0702349f2485c683260afed37dc42a8928e4e5cd19678ca8afa92abd57: Status 404 returned error can't find the container with id 3cf09d0702349f2485c683260afed37dc42a8928e4e5cd19678ca8afa92abd57 Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.181350 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.182004 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" gracePeriod=30 Nov 22 07:46:18 crc kubenswrapper[4858]: E1122 07:46:18.192607 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 
07:46:18 crc kubenswrapper[4858]: E1122 07:46:18.196729 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:46:18 crc kubenswrapper[4858]: E1122 07:46:18.200456 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:46:18 crc kubenswrapper[4858]: E1122 07:46:18.200544 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerName="nova-cell0-conductor-conductor" Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.418039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerStarted","Data":"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55"} Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.418510 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.418534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerStarted","Data":"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a"} Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.418551 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerStarted","Data":"3cf09d0702349f2485c683260afed37dc42a8928e4e5cd19678ca8afa92abd57"} Nov 22 07:46:18 crc kubenswrapper[4858]: I1122 07:46:18.449220 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56cfd7c4f7-gvswl" podStartSLOduration=2.44919002 podStartE2EDuration="2.44919002s" podCreationTimestamp="2025-11-22 07:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:18.439818118 +0000 UTC m=+2140.281241154" watchObservedRunningTime="2025-11-22 07:46:18.44919002 +0000 UTC m=+2140.290613026" Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.442174 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.442603 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-central-agent" containerID="cri-o://3cf2b02dd0a09d919e87c1565ad35b41d76bb4f1622b857afd01a53cc7768d36" gracePeriod=30 Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.444106 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" 
containerName="sg-core" containerID="cri-o://8e7de24208d36ac990f6e8c573b1df188181e2722380cec00c91fe1c1a00979c" gracePeriod=30 Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.444312 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="proxy-httpd" containerID="cri-o://0022c440e22d65d553be7fd6837a2d1ad7fd90b26d3e2bd98d1dbed844c63633" gracePeriod=30 Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.444406 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-notification-agent" containerID="cri-o://8fcda58f1e219b32437e7936af20605e3d6e3ab22eb3a47d3cfbb68a45a7bd17" gracePeriod=30 Nov 22 07:46:19 crc kubenswrapper[4858]: I1122 07:46:19.548148 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.178:3000/\": read tcp 10.217.0.2:55456->10.217.0.178:3000: read: connection reset by peer" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.444938 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a29c4a-6358-49a7-a718-860d528bace8" containerID="0022c440e22d65d553be7fd6837a2d1ad7fd90b26d3e2bd98d1dbed844c63633" exitCode=0 Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.445313 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a29c4a-6358-49a7-a718-860d528bace8" containerID="8e7de24208d36ac990f6e8c573b1df188181e2722380cec00c91fe1c1a00979c" exitCode=2 Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.445342 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a29c4a-6358-49a7-a718-860d528bace8" containerID="3cf2b02dd0a09d919e87c1565ad35b41d76bb4f1622b857afd01a53cc7768d36" exitCode=0 Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.445239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerDied","Data":"0022c440e22d65d553be7fd6837a2d1ad7fd90b26d3e2bd98d1dbed844c63633"} Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.445423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerDied","Data":"8e7de24208d36ac990f6e8c573b1df188181e2722380cec00c91fe1c1a00979c"} Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.445493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerDied","Data":"3cf2b02dd0a09d919e87c1565ad35b41d76bb4f1622b857afd01a53cc7768d36"} Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.448080 4858 generic.go:334] "Generic (PLEG): container finished" podID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerID="76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" exitCode=0 Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.449550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29fdc78b-c84c-47ff-b0b4-a854e74f23d5","Type":"ContainerDied","Data":"76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e"} Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.631397 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.719796 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data\") pod \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.719935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle\") pod \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.719986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2n2b\" (UniqueName: \"kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b\") pod \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\" (UID: \"29fdc78b-c84c-47ff-b0b4-a854e74f23d5\") " Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.728675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b" (OuterVolumeSpecName: "kube-api-access-w2n2b") pod "29fdc78b-c84c-47ff-b0b4-a854e74f23d5" (UID: "29fdc78b-c84c-47ff-b0b4-a854e74f23d5"). InnerVolumeSpecName "kube-api-access-w2n2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.753511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data" (OuterVolumeSpecName: "config-data") pod "29fdc78b-c84c-47ff-b0b4-a854e74f23d5" (UID: "29fdc78b-c84c-47ff-b0b4-a854e74f23d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.755475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29fdc78b-c84c-47ff-b0b4-a854e74f23d5" (UID: "29fdc78b-c84c-47ff-b0b4-a854e74f23d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.822303 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.822905 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2n2b\" (UniqueName: \"kubernetes.io/projected/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-kube-api-access-w2n2b\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:20 crc kubenswrapper[4858]: I1122 07:46:20.822922 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29fdc78b-c84c-47ff-b0b4-a854e74f23d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.463070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29fdc78b-c84c-47ff-b0b4-a854e74f23d5","Type":"ContainerDied","Data":"e3f2a2bd047ba3f96d8929103914b4130c0aa33eed9b157acdf361a92ebcf755"} Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.463587 4858 scope.go:117] "RemoveContainer" containerID="76c72df5ce98d723fdd7d0a146edf3e93c2e1ca0745710b6c4629345238ddc3e" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.463079 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.473823 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3a29c4a-6358-49a7-a718-860d528bace8" containerID="8fcda58f1e219b32437e7936af20605e3d6e3ab22eb3a47d3cfbb68a45a7bd17" exitCode=0 Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.473883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerDied","Data":"8fcda58f1e219b32437e7936af20605e3d6e3ab22eb3a47d3cfbb68a45a7bd17"} Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.520311 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.529981 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.573234 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" path="/var/lib/kubelet/pods/29fdc78b-c84c-47ff-b0b4-a854e74f23d5/volumes" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.573932 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:21 crc kubenswrapper[4858]: E1122 07:46:21.574350 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerName="nova-cell0-conductor-conductor" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.574374 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerName="nova-cell0-conductor-conductor" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.574642 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="29fdc78b-c84c-47ff-b0b4-a854e74f23d5" containerName="nova-cell0-conductor-conductor" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.576261 4858 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.582336 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.583677 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fdmdn" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.600694 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.695540 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.748810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhnww\" (UniqueName: \"kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.748906 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.748997 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.850823 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.850988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7xm\" (UniqueName: \"kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851156 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851222 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data\") pod \"b3a29c4a-6358-49a7-a718-860d528bace8\" (UID: \"b3a29c4a-6358-49a7-a718-860d528bace8\") " Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhnww\" (UniqueName: \"kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.851961 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.852413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.858110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts" (OuterVolumeSpecName: "scripts") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.859468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.859590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.860573 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm" (OuterVolumeSpecName: "kube-api-access-gb7xm") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "kube-api-access-gb7xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.873597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhnww\" (UniqueName: \"kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww\") pod \"nova-cell0-conductor-0\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.886128 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.953752 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb7xm\" (UniqueName: \"kubernetes.io/projected/b3a29c4a-6358-49a7-a718-860d528bace8-kube-api-access-gb7xm\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.953796 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.953814 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3a29c4a-6358-49a7-a718-860d528bace8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.953825 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.957309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:21 crc kubenswrapper[4858]: I1122 07:46:21.980301 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data" (OuterVolumeSpecName: "config-data") pod "b3a29c4a-6358-49a7-a718-860d528bace8" (UID: "b3a29c4a-6358-49a7-a718-860d528bace8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.011854 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.055863 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.055908 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a29c4a-6358-49a7-a718-860d528bace8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.490811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3a29c4a-6358-49a7-a718-860d528bace8","Type":"ContainerDied","Data":"851228033aaa69018ce7ea85beea059b381fa9cdf407d065dab12874a19c3dcf"} Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.491148 4858 scope.go:117] "RemoveContainer" containerID="0022c440e22d65d553be7fd6837a2d1ad7fd90b26d3e2bd98d1dbed844c63633" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.491303 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.515539 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.522159 4858 scope.go:117] "RemoveContainer" containerID="8e7de24208d36ac990f6e8c573b1df188181e2722380cec00c91fe1c1a00979c" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.541654 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.551922 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.574121 4858 scope.go:117] "RemoveContainer" containerID="8fcda58f1e219b32437e7936af20605e3d6e3ab22eb3a47d3cfbb68a45a7bd17" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.600007 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:22 crc kubenswrapper[4858]: E1122 07:46:22.601074 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-notification-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.601162 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-notification-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: E1122 07:46:22.601259 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="proxy-httpd" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.601389 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="proxy-httpd" Nov 22 07:46:22 crc kubenswrapper[4858]: E1122 07:46:22.601476 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="sg-core" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.601532 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="sg-core" Nov 22 07:46:22 crc kubenswrapper[4858]: E1122 07:46:22.601606 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-central-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.601662 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-central-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.602112 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="proxy-httpd" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.602216 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-central-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.602340 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="ceilometer-notification-agent" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.602449 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" containerName="sg-core" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.607417 4858 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.612614 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.612927 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.648774 4858 scope.go:117] "RemoveContainer" containerID="3cf2b02dd0a09d919e87c1565ad35b41d76bb4f1622b857afd01a53cc7768d36" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.655451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvjq\" (UniqueName: \"kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.770833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmvjq\" (UniqueName: \"kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873966 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.873998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.874532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.874615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.880187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.883341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.883566 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.893444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmvjq\" (UniqueName: \"kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4858]: I1122 07:46:22.900155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data\") pod \"ceilometer-0\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " pod="openstack/ceilometer-0" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.090063 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.503812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738","Type":"ContainerStarted","Data":"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792"} Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.504175 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738","Type":"ContainerStarted","Data":"82d230b26c73a572f6224eb5b8e5354607b94ca8a9099f0bea961c0506ee3291"} Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.504618 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.539202 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.539177504 podStartE2EDuration="2.539177504s" podCreationTimestamp="2025-11-22 07:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:23.532936343 +0000 UTC m=+2145.374359349" watchObservedRunningTime="2025-11-22 07:46:23.539177504 +0000 UTC m=+2145.380600510" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.550746 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a29c4a-6358-49a7-a718-860d528bace8" path="/var/lib/kubelet/pods/b3a29c4a-6358-49a7-a718-860d528bace8/volumes" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.593148 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.610272 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.847649 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.917477 
4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:46:23 crc kubenswrapper[4858]: I1122 07:46:23.917785 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="dnsmasq-dns" containerID="cri-o://7b51904963b3fb723d92d0062ca3149a1f3ed4657bb8a888c8f9efadc4d7263a" gracePeriod=10 Nov 22 07:46:24 crc kubenswrapper[4858]: I1122 07:46:24.518274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerStarted","Data":"550c1b025f6a91d9f1eb90e387d55352b53fc4e2d4aa355371fdd228373dac57"} Nov 22 07:46:24 crc kubenswrapper[4858]: I1122 07:46:24.753061 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused" Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.532625 4858 generic.go:334] "Generic (PLEG): container finished" podID="2fce793a-2443-44d1-92ff-da5191de627e" containerID="7b51904963b3fb723d92d0062ca3149a1f3ed4657bb8a888c8f9efadc4d7263a" exitCode=0 Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.532686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" event={"ID":"2fce793a-2443-44d1-92ff-da5191de627e","Type":"ContainerDied","Data":"7b51904963b3fb723d92d0062ca3149a1f3ed4657bb8a888c8f9efadc4d7263a"} Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.705543 4858 scope.go:117] "RemoveContainer" containerID="e55158c45050927635aa9db9ab484583b8f882f7f462bbec9bbaa9eb32add1c5" Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.785557 4858 scope.go:117] "RemoveContainer" containerID="8f97cf8d77768e552938342ad815f326771e5cbc6898755842e6ef708fc38e40" Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.942539 4858 scope.go:117] "RemoveContainer" containerID="12f4a71e97591d1d56eaa1899d33619999b0a2be882cfab84c897a7b44f91342" Nov 22 07:46:25 crc kubenswrapper[4858]: I1122 07:46:25.949592 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45558\" (UniqueName: \"kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038428 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.038591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc\") pod \"2fce793a-2443-44d1-92ff-da5191de627e\" (UID: \"2fce793a-2443-44d1-92ff-da5191de627e\") " Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.065620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558" (OuterVolumeSpecName: "kube-api-access-45558") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "kube-api-access-45558". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.107762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.115236 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.118106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.119909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config" (OuterVolumeSpecName: "config") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.120491 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2fce793a-2443-44d1-92ff-da5191de627e" (UID: "2fce793a-2443-44d1-92ff-da5191de627e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141017 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141067 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141081 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45558\" (UniqueName: \"kubernetes.io/projected/2fce793a-2443-44d1-92ff-da5191de627e-kube-api-access-45558\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141097 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141108 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.141119 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fce793a-2443-44d1-92ff-da5191de627e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.262799 4858 scope.go:117] "RemoveContainer" containerID="1ec87502d31cc41b8659517a5a9a1782871638ffd5e0d536e73cd11e489d72d2" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.549157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" event={"ID":"2fce793a-2443-44d1-92ff-da5191de627e","Type":"ContainerDied","Data":"dc11ec8c0c14a494db8cc8dd1d31f504c3996bd31b1d98b77756be289493f8ca"} Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.549915 4858 scope.go:117] "RemoveContainer" 
containerID="7b51904963b3fb723d92d0062ca3149a1f3ed4657bb8a888c8f9efadc4d7263a" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.549227 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-655dc495c7-fxvzj" Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.587882 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.595940 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-655dc495c7-fxvzj"] Nov 22 07:46:26 crc kubenswrapper[4858]: I1122 07:46:26.829234 4858 scope.go:117] "RemoveContainer" containerID="21af0fbe578e801e40dfde3b6da14838bf88faca14be620f98094fdce9858804" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.072779 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.558806 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce793a-2443-44d1-92ff-da5191de627e" path="/var/lib/kubelet/pods/2fce793a-2443-44d1-92ff-da5191de627e/volumes" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.930848 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gdhvl"] Nov 22 07:46:27 crc kubenswrapper[4858]: E1122 07:46:27.931647 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="init" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.931740 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="init" Nov 22 07:46:27 crc kubenswrapper[4858]: E1122 07:46:27.931806 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="dnsmasq-dns" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.931858 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="dnsmasq-dns" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.932105 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce793a-2443-44d1-92ff-da5191de627e" containerName="dnsmasq-dns" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.932965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.938077 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.938226 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 07:46:27 crc kubenswrapper[4858]: I1122 07:46:27.944733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gdhvl"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.084720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.085152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7qg\" (UniqueName: \"kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.085226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.085313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.187378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.187471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7qg\" (UniqueName: \"kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.187546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.187635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.194658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.197053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.225921 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.253934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7qg\" (UniqueName: \"kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg\") pod \"nova-cell0-cell-mapping-gdhvl\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.256675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.287730 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.292631 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.307570 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.328442 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.330252 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.350727 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.351154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.372370 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ttv\" (UniqueName: \"kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.391415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76zp\" (UniqueName: \"kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: 
I1122 07:46:28.495638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ttv\" (UniqueName: \"kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.495963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76zp\" (UniqueName: \"kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.496057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.497261 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.497304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.523385 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.525196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.545216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.557946 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.581248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ttv\" (UniqueName: \"kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.583376 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.596610 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.611521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.611624 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.613672 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.695184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76zp\" (UniqueName: \"kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp\") pod \"nova-metadata-0\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.699550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerStarted","Data":"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154"} Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736648 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fw7\" (UniqueName: \"kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5wd\" (UniqueName: \"kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736886 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.736924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.766106 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.774082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.851771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.851893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.851938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.852079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fw7\" (UniqueName: \"kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.852104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5wd\" (UniqueName: \"kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.852187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: 
I1122 07:46:28.852227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.852276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.852340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.870424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.878474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.885482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.885927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.887741 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.922336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.891485 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.888479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.948562 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.950529 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.958649 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5wd\" (UniqueName: \"kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.959426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.966122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fw7\" (UniqueName: \"kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7\") pod \"dnsmasq-dns-5dd7c4987f-zvjrj\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:28 crc kubenswrapper[4858]: I1122 07:46:28.990947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.035252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.063537 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.065444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc 
kubenswrapper[4858]: I1122 07:46:29.065704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n2x\" (UniqueName: \"kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.169027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.178788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.178941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n2x\" (UniqueName: \"kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.199951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.211995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.219993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n2x\" (UniqueName: \"kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x\") pod \"nova-scheduler-0\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.222255 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.270094 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.369417 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.422648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gdhvl"] Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.668951 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:29 crc kubenswrapper[4858]: W1122 07:46:29.700420 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc10be37_2231_425b_a45e_a7f0ed2c5f58.slice/crio-28b0c8e6db960c710b5a55c3b887891914b471dc49cc92b1d08b92b2d67a1605 WatchSource:0}: Error finding container 28b0c8e6db960c710b5a55c3b887891914b471dc49cc92b1d08b92b2d67a1605: Status 404 returned error can't find the container with id 28b0c8e6db960c710b5a55c3b887891914b471dc49cc92b1d08b92b2d67a1605 Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.724463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gdhvl" event={"ID":"be60fe14-f226-4d4e-a855-47991607fd04","Type":"ContainerStarted","Data":"1e21beef9c559ed7da10ee4fb8fddcc136298158c9853a7badc5604775005ac0"} Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.841193 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bwlrp"] Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.844333 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.850588 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.851514 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.855135 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bwlrp"] Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.902043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hls\" (UniqueName: \"kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.902228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.902349 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.902423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:29 crc kubenswrapper[4858]: I1122 07:46:29.924569 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.007088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.007572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.007635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.007721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2hls\" (UniqueName: \"kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.019386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.019974 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.020334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.046353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2hls\" (UniqueName: \"kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls\") pod \"nova-cell1-conductor-db-sync-bwlrp\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 
07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.180419 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.191672 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.223473 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:46:30 crc kubenswrapper[4858]: I1122 07:46:30.432855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:46:30 crc kubenswrapper[4858]: W1122 07:46:30.478493 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9eeefb0_28fa_4025_b31c_dd009f3921e1.slice/crio-97c80a4c4a84bcb12c0765629e61d03ea918dab22c9d286526dea8c20230022d WatchSource:0}: Error finding container 97c80a4c4a84bcb12c0765629e61d03ea918dab22c9d286526dea8c20230022d: Status 404 returned error can't find the container with id 97c80a4c4a84bcb12c0765629e61d03ea918dab22c9d286526dea8c20230022d Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.742799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerStarted","Data":"2547d5161e20e416b6057030413dae3eacb13bcdb06b4c70da52accb90c16de6"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.748569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerStarted","Data":"28b0c8e6db960c710b5a55c3b887891914b471dc49cc92b1d08b92b2d67a1605"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.752383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerStarted","Data":"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.754123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9eeefb0-28fa-4025-b31c-dd009f3921e1","Type":"ContainerStarted","Data":"97c80a4c4a84bcb12c0765629e61d03ea918dab22c9d286526dea8c20230022d"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.757815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" event={"ID":"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1","Type":"ContainerStarted","Data":"fef2dc62a43da635cc7f550b7dc92a23423e0bee9acca763c25172d56d4b3aee"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.760365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4","Type":"ContainerStarted","Data":"b3d4c706c1be6eacd697ff96df4e94108967553de643c8e40a430f96c4ad931a"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.765047 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gdhvl" event={"ID":"be60fe14-f226-4d4e-a855-47991607fd04","Type":"ContainerStarted","Data":"16930abb64b29909bb858a278fc5b86a9cc7607ab57cf00aae8bb400015451f7"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.800701 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gdhvl" podStartSLOduration=3.800663147 
podStartE2EDuration="3.800663147s" podCreationTimestamp="2025-11-22 07:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:30.790002324 +0000 UTC m=+2152.631425350" watchObservedRunningTime="2025-11-22 07:46:30.800663147 +0000 UTC m=+2152.642086183" Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:30.962570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bwlrp"] Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:31.780033 4858 generic.go:334] "Generic (PLEG): container finished" podID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerID="2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85" exitCode=0 Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:31.780134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" event={"ID":"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1","Type":"ContainerDied","Data":"2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:31.783434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" event={"ID":"b100894f-375d-4d4f-9bfa-7c87e4db058d","Type":"ContainerStarted","Data":"e08b7a1f8e2e8f5bdd733d2f70df309fb24c38853d5d408bf801b16aee9f17da"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:31.783504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" event={"ID":"b100894f-375d-4d4f-9bfa-7c87e4db058d","Type":"ContainerStarted","Data":"af6d3de4e685072287796e93c3dcf30402423f9dd4b9fb96d0d39fa3b7a0699a"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:31.837870 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" podStartSLOduration=2.837839359 podStartE2EDuration="2.837839359s" podCreationTimestamp="2025-11-22 07:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:31.83071162 +0000 UTC m=+2153.672134646" watchObservedRunningTime="2025-11-22 07:46:31.837839359 +0000 UTC m=+2153.679262355" Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:32.831026 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:32.832627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerStarted","Data":"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:32.840638 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:32.850631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" event={"ID":"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1","Type":"ContainerStarted","Data":"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5"} Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:33.869414 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:33 crc kubenswrapper[4858]: I1122 07:46:33.940506 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" podStartSLOduration=5.940466832 podStartE2EDuration="5.940466832s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:33.901118366 +0000 UTC m=+2155.742541372" watchObservedRunningTime="2025-11-22 07:46:33.940466832 +0000 UTC m=+2155.781889838" Nov 22 07:46:39 crc kubenswrapper[4858]: I1122 07:46:39.273582 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:46:39 crc kubenswrapper[4858]: I1122 07:46:39.370461 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:39 crc kubenswrapper[4858]: I1122 07:46:39.371094 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-797bbc649-j82sw" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="dnsmasq-dns" containerID="cri-o://1d42cfb1f43238fcf3076ebf8b499aa6588df737fc4c961847d4fa0c324ab0d3" gracePeriod=10 Nov 22 07:46:39 crc kubenswrapper[4858]: I1122 07:46:39.956421 4858 generic.go:334] "Generic (PLEG): container finished" podID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerID="1d42cfb1f43238fcf3076ebf8b499aa6588df737fc4c961847d4fa0c324ab0d3" exitCode=0 Nov 22 07:46:39 crc kubenswrapper[4858]: I1122 07:46:39.956502 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-j82sw" event={"ID":"4dd2e516-84ae-41bd-9fdb-16aa2040356b","Type":"ContainerDied","Data":"1d42cfb1f43238fcf3076ebf8b499aa6588df737fc4c961847d4fa0c324ab0d3"} Nov 22 07:46:43 crc kubenswrapper[4858]: I1122 07:46:43.985080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.002832 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.009775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-j82sw" event={"ID":"4dd2e516-84ae-41bd-9fdb-16aa2040356b","Type":"ContainerDied","Data":"92976e7f09f5cd61ed4baba5ad7bdc35364991c9141cef6c9572c96e234da774"} Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.009847 4858 scope.go:117] "RemoveContainer" containerID="1d42cfb1f43238fcf3076ebf8b499aa6588df737fc4c961847d4fa0c324ab0d3" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.010039 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-j82sw" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.015843 4858 generic.go:334] "Generic (PLEG): container finished" podID="be60fe14-f226-4d4e-a855-47991607fd04" containerID="16930abb64b29909bb858a278fc5b86a9cc7607ab57cf00aae8bb400015451f7" exitCode=0 Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.015907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gdhvl" event={"ID":"be60fe14-f226-4d4e-a855-47991607fd04","Type":"ContainerDied","Data":"16930abb64b29909bb858a278fc5b86a9cc7607ab57cf00aae8bb400015451f7"} Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.084483 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.084649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.084771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9zf\" (UniqueName: \"kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.084794 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.085975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.086076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0\") pod \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\" (UID: \"4dd2e516-84ae-41bd-9fdb-16aa2040356b\") " Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.094060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf" (OuterVolumeSpecName: "kube-api-access-cf9zf") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "kube-api-access-cf9zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.146791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.146791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.151996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.162979 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config" (OuterVolumeSpecName: "config") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.174911 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dd2e516-84ae-41bd-9fdb-16aa2040356b" (UID: "4dd2e516-84ae-41bd-9fdb-16aa2040356b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189852 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189915 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9zf\" (UniqueName: \"kubernetes.io/projected/4dd2e516-84ae-41bd-9fdb-16aa2040356b-kube-api-access-cf9zf\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189933 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189947 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189961 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.189972 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dd2e516-84ae-41bd-9fdb-16aa2040356b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.312556 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.313018 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.313070 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.314093 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.314158 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0" gracePeriod=600 Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.351850 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.362576 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-j82sw"] Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.373079 4858 scope.go:117] "RemoveContainer" containerID="b8d598333ebe9b8f17d3277c98df424f1b884480ba90b3b71531396135fece8f" Nov 22 07:46:45 crc kubenswrapper[4858]: I1122 07:46:45.551569 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" path="/var/lib/kubelet/pods/4dd2e516-84ae-41bd-9fdb-16aa2040356b/volumes" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.045865 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0" exitCode=0 Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.045940 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0"} Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.355828 4858 scope.go:117] "RemoveContainer" containerID="36e67bd052fca4bcd9ead35faffda543a7c8763a8a53afd5eff3bab0873207cd" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.659242 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.737133 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data\") pod \"be60fe14-f226-4d4e-a855-47991607fd04\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.737201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7qg\" (UniqueName: \"kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg\") pod \"be60fe14-f226-4d4e-a855-47991607fd04\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.737320 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts\") pod \"be60fe14-f226-4d4e-a855-47991607fd04\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.737443 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle\") pod \"be60fe14-f226-4d4e-a855-47991607fd04\" (UID: \"be60fe14-f226-4d4e-a855-47991607fd04\") " Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.742614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts" (OuterVolumeSpecName: "scripts") pod "be60fe14-f226-4d4e-a855-47991607fd04" (UID: "be60fe14-f226-4d4e-a855-47991607fd04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.760658 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg" (OuterVolumeSpecName: "kube-api-access-gb7qg") pod "be60fe14-f226-4d4e-a855-47991607fd04" (UID: "be60fe14-f226-4d4e-a855-47991607fd04"). InnerVolumeSpecName "kube-api-access-gb7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.840914 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb7qg\" (UniqueName: \"kubernetes.io/projected/be60fe14-f226-4d4e-a855-47991607fd04-kube-api-access-gb7qg\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.840964 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.873819 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.917679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data" (OuterVolumeSpecName: "config-data") pod "be60fe14-f226-4d4e-a855-47991607fd04" (UID: "be60fe14-f226-4d4e-a855-47991607fd04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.924059 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be60fe14-f226-4d4e-a855-47991607fd04" (UID: "be60fe14-f226-4d4e-a855-47991607fd04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.943193 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.943262 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be60fe14-f226-4d4e-a855-47991607fd04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.980954 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.984672 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cbb9bb55b-9l7r4" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-api" containerID="cri-o://c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600" gracePeriod=30 Nov 22 07:46:46 crc kubenswrapper[4858]: I1122 07:46:46.985290 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cbb9bb55b-9l7r4" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-httpd" containerID="cri-o://17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb" gracePeriod=30 Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.123210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerStarted","Data":"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91"} Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.129963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4","Type":"ContainerStarted","Data":"2ffc1c6b78623d0fadfb08bceeb8b1b0fe581cc02d6f6632a9ea262c0c6faa8e"} Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.130177 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2ffc1c6b78623d0fadfb08bceeb8b1b0fe581cc02d6f6632a9ea262c0c6faa8e" gracePeriod=30 Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.133189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gdhvl" event={"ID":"be60fe14-f226-4d4e-a855-47991607fd04","Type":"ContainerDied","Data":"1e21beef9c559ed7da10ee4fb8fddcc136298158c9853a7badc5604775005ac0"} Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.133241 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e21beef9c559ed7da10ee4fb8fddcc136298158c9853a7badc5604775005ac0" Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.133282 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gdhvl" Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.139057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerStarted","Data":"4b88761e37487a6d85a617d1591308cf80bfeea9d7cfceb8a5a7139710e34be5"} Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.146611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerStarted","Data":"a1c40acf7e642fb6ce1eb4da85ffa352a187234237fb41b8613af8e58f87df7d"} Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.168994 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.762941779 podStartE2EDuration="19.168962288s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="2025-11-22 07:46:30.201783728 +0000 UTC m=+2152.043206744" lastFinishedPulling="2025-11-22 07:46:45.607804247 +0000 UTC m=+2167.449227253" observedRunningTime="2025-11-22 07:46:47.156806937 +0000 UTC m=+2168.998229943" watchObservedRunningTime="2025-11-22 07:46:47.168962288 +0000 UTC m=+2169.010385304" Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.294571 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:47 crc kubenswrapper[4858]: I1122 07:46:47.321744 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.161715 4858 generic.go:334] "Generic (PLEG): container finished" podID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerID="17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb" exitCode=0 Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.161803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerDied","Data":"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb"} Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.164589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerStarted","Data":"135fe97800233d2a553275349170b662c741cb8b9a33db57fd0558b4c9727c93"} Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.164706 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-log" containerID="cri-o://4b88761e37487a6d85a617d1591308cf80bfeea9d7cfceb8a5a7139710e34be5" gracePeriod=30 Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.164728 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-api" containerID="cri-o://135fe97800233d2a553275349170b662c741cb8b9a33db57fd0558b4c9727c93" gracePeriod=30 Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.175162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerStarted","Data":"8ab5680251d896292a6194dcf55dc596bebce98d1e92ec7177f1f4a6faecef33"} Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.175302 4858 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-metadata-0" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-metadata" containerID="cri-o://8ab5680251d896292a6194dcf55dc596bebce98d1e92ec7177f1f4a6faecef33" gracePeriod=30 Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.175296 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-log" containerID="cri-o://a1c40acf7e642fb6ce1eb4da85ffa352a187234237fb41b8613af8e58f87df7d" gracePeriod=30 Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.186606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9eeefb0-28fa-4025-b31c-dd009f3921e1","Type":"ContainerStarted","Data":"847c3a6df4435d370e874a25826de76c5737e8e8b4a6447f6caf294fcf2286a4"} Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.194569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a"} Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.194786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.203938 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.786624401 podStartE2EDuration="20.203905338s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="2025-11-22 07:46:29.940259713 +0000 UTC m=+2151.781682719" lastFinishedPulling="2025-11-22 07:46:46.35754065 +0000 UTC m=+2168.198963656" observedRunningTime="2025-11-22 07:46:48.199030101 +0000 UTC m=+2170.040453117" watchObservedRunningTime="2025-11-22 07:46:48.203905338 +0000 UTC m=+2170.045328344" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.240127 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.491476619 podStartE2EDuration="26.240099603s" podCreationTimestamp="2025-11-22 07:46:22 +0000 UTC" firstStartedPulling="2025-11-22 07:46:23.60994338 +0000 UTC m=+2145.451366386" lastFinishedPulling="2025-11-22 07:46:46.358566364 +0000 UTC m=+2168.199989370" observedRunningTime="2025-11-22 07:46:48.23629016 +0000 UTC m=+2170.077713176" watchObservedRunningTime="2025-11-22 07:46:48.240099603 +0000 UTC m=+2170.081522609" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.284077 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.638570337 podStartE2EDuration="20.284053847s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="2025-11-22 07:46:29.713039102 +0000 UTC m=+2151.554462108" lastFinishedPulling="2025-11-22 07:46:46.358522612 +0000 UTC m=+2168.199945618" observedRunningTime="2025-11-22 07:46:48.280562925 +0000 UTC m=+2170.121985931" watchObservedRunningTime="2025-11-22 07:46:48.284053847 +0000 UTC m=+2170.125476843" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.308869 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.213485881 podStartE2EDuration="20.308832254s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="2025-11-22 
07:46:30.512527086 +0000 UTC m=+2152.353950092" lastFinishedPulling="2025-11-22 07:46:45.607873459 +0000 UTC m=+2167.449296465" observedRunningTime="2025-11-22 07:46:48.306802549 +0000 UTC m=+2170.148225555" watchObservedRunningTime="2025-11-22 07:46:48.308832254 +0000 UTC m=+2170.150255290" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.775642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.775957 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:46:48 crc kubenswrapper[4858]: I1122 07:46:48.846550 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-797bbc649-j82sw" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.180:5353: i/o timeout" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.215699 4858 generic.go:334] "Generic (PLEG): container finished" podID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerID="135fe97800233d2a553275349170b662c741cb8b9a33db57fd0558b4c9727c93" exitCode=0 Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.216291 4858 generic.go:334] "Generic (PLEG): container finished" podID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerID="4b88761e37487a6d85a617d1591308cf80bfeea9d7cfceb8a5a7139710e34be5" exitCode=143 Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.217809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerDied","Data":"135fe97800233d2a553275349170b662c741cb8b9a33db57fd0558b4c9727c93"} Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.217885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerDied","Data":"4b88761e37487a6d85a617d1591308cf80bfeea9d7cfceb8a5a7139710e34be5"} Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.223416 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.227843 4858 generic.go:334] "Generic (PLEG): container finished" podID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerID="8ab5680251d896292a6194dcf55dc596bebce98d1e92ec7177f1f4a6faecef33" exitCode=0 Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.227880 4858 generic.go:334] "Generic (PLEG): container finished" podID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerID="a1c40acf7e642fb6ce1eb4da85ffa352a187234237fb41b8613af8e58f87df7d" exitCode=143 Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.229355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerDied","Data":"8ab5680251d896292a6194dcf55dc596bebce98d1e92ec7177f1f4a6faecef33"} Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.229402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerDied","Data":"a1c40acf7e642fb6ce1eb4da85ffa352a187234237fb41b8613af8e58f87df7d"} Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.229641 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" 
containerName="nova-scheduler-scheduler" containerID="cri-o://847c3a6df4435d370e874a25826de76c5737e8e8b4a6447f6caf294fcf2286a4" gracePeriod=30 Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.373521 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.526274 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.539661 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643380 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs\") pod \"4dba84fa-2684-411e-af7b-a5c3b5adad74\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ttv\" (UniqueName: \"kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv\") pod \"4dba84fa-2684-411e-af7b-a5c3b5adad74\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle\") pod \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643664 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data\") pod \"4dba84fa-2684-411e-af7b-a5c3b5adad74\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643742 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j76zp\" (UniqueName: \"kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp\") pod \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data\") pod \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs\") pod \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\" (UID: \"bc10be37-2231-425b-a45e-a7f0ed2c5f58\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.643986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle\") pod \"4dba84fa-2684-411e-af7b-a5c3b5adad74\" (UID: \"4dba84fa-2684-411e-af7b-a5c3b5adad74\") " Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.644180 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs" (OuterVolumeSpecName: "logs") pod "4dba84fa-2684-411e-af7b-a5c3b5adad74" (UID: "4dba84fa-2684-411e-af7b-a5c3b5adad74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.644675 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba84fa-2684-411e-af7b-a5c3b5adad74-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.648602 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs" (OuterVolumeSpecName: "logs") pod "bc10be37-2231-425b-a45e-a7f0ed2c5f58" (UID: "bc10be37-2231-425b-a45e-a7f0ed2c5f58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.662105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv" (OuterVolumeSpecName: "kube-api-access-59ttv") pod "4dba84fa-2684-411e-af7b-a5c3b5adad74" (UID: "4dba84fa-2684-411e-af7b-a5c3b5adad74"). InnerVolumeSpecName "kube-api-access-59ttv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.665040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp" (OuterVolumeSpecName: "kube-api-access-j76zp") pod "bc10be37-2231-425b-a45e-a7f0ed2c5f58" (UID: "bc10be37-2231-425b-a45e-a7f0ed2c5f58"). InnerVolumeSpecName "kube-api-access-j76zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.695820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data" (OuterVolumeSpecName: "config-data") pod "4dba84fa-2684-411e-af7b-a5c3b5adad74" (UID: "4dba84fa-2684-411e-af7b-a5c3b5adad74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.697804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data" (OuterVolumeSpecName: "config-data") pod "bc10be37-2231-425b-a45e-a7f0ed2c5f58" (UID: "bc10be37-2231-425b-a45e-a7f0ed2c5f58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.701620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc10be37-2231-425b-a45e-a7f0ed2c5f58" (UID: "bc10be37-2231-425b-a45e-a7f0ed2c5f58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.717103 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dba84fa-2684-411e-af7b-a5c3b5adad74" (UID: "4dba84fa-2684-411e-af7b-a5c3b5adad74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747220 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ttv\" (UniqueName: \"kubernetes.io/projected/4dba84fa-2684-411e-af7b-a5c3b5adad74-kube-api-access-59ttv\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747310 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747341 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747357 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j76zp\" (UniqueName: \"kubernetes.io/projected/bc10be37-2231-425b-a45e-a7f0ed2c5f58-kube-api-access-j76zp\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747373 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc10be37-2231-425b-a45e-a7f0ed2c5f58-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747385 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc10be37-2231-425b-a45e-a7f0ed2c5f58-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:49 crc kubenswrapper[4858]: I1122 07:46:49.747396 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba84fa-2684-411e-af7b-a5c3b5adad74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.242470 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.242511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4dba84fa-2684-411e-af7b-a5c3b5adad74","Type":"ContainerDied","Data":"2547d5161e20e416b6057030413dae3eacb13bcdb06b4c70da52accb90c16de6"} Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.242607 4858 scope.go:117] "RemoveContainer" containerID="135fe97800233d2a553275349170b662c741cb8b9a33db57fd0558b4c9727c93" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.246913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc10be37-2231-425b-a45e-a7f0ed2c5f58","Type":"ContainerDied","Data":"28b0c8e6db960c710b5a55c3b887891914b471dc49cc92b1d08b92b2d67a1605"} Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.247006 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.301540 4858 scope.go:117] "RemoveContainer" containerID="4b88761e37487a6d85a617d1591308cf80bfeea9d7cfceb8a5a7139710e34be5" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.304713 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.334485 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.367995 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.401111 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.401953 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-metadata" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.401987 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-metadata" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402001 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="dnsmasq-dns" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="dnsmasq-dns" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402032 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-log" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402041 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-log" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402055 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="init" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402063 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="init" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402082 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-api" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402092 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-api" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402126 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be60fe14-f226-4d4e-a855-47991607fd04" containerName="nova-manage" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402136 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be60fe14-f226-4d4e-a855-47991607fd04" containerName="nova-manage" Nov 22 07:46:50 crc kubenswrapper[4858]: E1122 07:46:50.402154 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-log" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402164 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" 
containerName="nova-api-log" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402751 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="be60fe14-f226-4d4e-a855-47991607fd04" containerName="nova-manage" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402790 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-api" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402805 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd2e516-84ae-41bd-9fdb-16aa2040356b" containerName="dnsmasq-dns" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402815 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-log" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402822 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" containerName="nova-metadata-metadata" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.402833 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" containerName="nova-api-log" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.404022 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.406766 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.424674 4858 scope.go:117] "RemoveContainer" containerID="8ab5680251d896292a6194dcf55dc596bebce98d1e92ec7177f1f4a6faecef33" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.427046 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.449567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.451553 4858 scope.go:117] "RemoveContainer" containerID="a1c40acf7e642fb6ce1eb4da85ffa352a187234237fb41b8613af8e58f87df7d" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.461588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.461736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.461852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xv5q\" (UniqueName: \"kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.461892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.464660 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.467032 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.471067 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.471268 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.473489 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563400 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563490 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt58d\" (UniqueName: \"kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " 
pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xv5q\" (UniqueName: \"kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.563620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.565159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.569484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.573406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.584842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xv5q\" (UniqueName: \"kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q\") pod \"nova-api-0\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.665577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.665643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.665744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.665778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt58d\" (UniqueName: \"kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " 
pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.665864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.668189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.672958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.673250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.673672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.690852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt58d\" (UniqueName: \"kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d\") pod \"nova-metadata-0\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " pod="openstack/nova-metadata-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.745148 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:46:50 crc kubenswrapper[4858]: I1122 07:46:50.783536 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:46:51 crc kubenswrapper[4858]: I1122 07:46:51.277919 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:46:51 crc kubenswrapper[4858]: I1122 07:46:51.413422 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:46:51 crc kubenswrapper[4858]: I1122 07:46:51.555984 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dba84fa-2684-411e-af7b-a5c3b5adad74" path="/var/lib/kubelet/pods/4dba84fa-2684-411e-af7b-a5c3b5adad74/volumes" Nov 22 07:46:51 crc kubenswrapper[4858]: I1122 07:46:51.556947 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc10be37-2231-425b-a45e-a7f0ed2c5f58" path="/var/lib/kubelet/pods/bc10be37-2231-425b-a45e-a7f0ed2c5f58/volumes" Nov 22 07:46:52 crc kubenswrapper[4858]: I1122 07:46:52.299363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerStarted","Data":"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9"} Nov 22 07:46:52 crc kubenswrapper[4858]: I1122 07:46:52.299966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerStarted","Data":"17f5463bfac4ad8ff9d2da6c4b7ad20898094d6d198c3fd4c8ce70e4a28422e3"} Nov 22 07:46:52 crc kubenswrapper[4858]: I1122 07:46:52.320545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerStarted","Data":"bc8a27eea352263f98fe25e76c1d957eeeb184f506f3aa6daf2e74580fb96d1b"} Nov 22 07:46:52 crc kubenswrapper[4858]: I1122 07:46:52.320606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerStarted","Data":"796a6c9336b19b2f5cb038c450a4bcbd027c1073804e7268dadfb98813321432"} Nov 22 07:46:53 crc kubenswrapper[4858]: I1122 07:46:53.335431 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerStarted","Data":"0fa5b0799c1ab37edcacaa7761400cfbd72a9c9bff76e4ad8ef5cba123f45786"} Nov 22 07:46:53 crc kubenswrapper[4858]: I1122 07:46:53.349370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerStarted","Data":"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee"} Nov 22 07:46:53 crc kubenswrapper[4858]: I1122 07:46:53.373514 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.373488463 podStartE2EDuration="3.373488463s" podCreationTimestamp="2025-11-22 07:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:53.366971893 +0000 UTC m=+2175.208394929" watchObservedRunningTime="2025-11-22 07:46:53.373488463 +0000 UTC m=+2175.214911469" Nov 22 07:46:55 crc kubenswrapper[4858]: I1122 07:46:55.784625 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:46:55 crc kubenswrapper[4858]: I1122 07:46:55.786052 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:46:57 crc 
kubenswrapper[4858]: I1122 07:46:57.101455 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.133807 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.133778863 podStartE2EDuration="7.133778863s" podCreationTimestamp="2025-11-22 07:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:53.395062247 +0000 UTC m=+2175.236485283" watchObservedRunningTime="2025-11-22 07:46:57.133778863 +0000 UTC m=+2178.975201869" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.241003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqfw\" (UniqueName: \"kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw\") pod \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.241167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs\") pod \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.241203 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config\") pod \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.241287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle\") pod \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.241364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config\") pod \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\" (UID: \"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e\") " Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.248016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" (UID: "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.251878 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw" (OuterVolumeSpecName: "kube-api-access-vfqfw") pod "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" (UID: "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e"). InnerVolumeSpecName "kube-api-access-vfqfw". 
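
In the two "Observed pod startup duration" entries above, the reported figure matches watchObservedRunningTime minus podCreationTimestamp: 3.373488463s for nova-api-0 and 7.133778863s for nova-metadata-0 (no image pulls happened, so the pulling timestamps stay at the zero value). A quick check of the arithmetic, using the timestamps from those entries truncated to microseconds:

    from datetime import datetime, timezone

    created = datetime(2025, 11, 22, 7, 46, 50, tzinfo=timezone.utc)          # podCreationTimestamp

    api_observed = datetime(2025, 11, 22, 7, 46, 53, 373488, tzinfo=timezone.utc)   # 07:46:53.373488463
    print((api_observed - created).total_seconds())    # 3.373488 ~ podStartSLOduration for nova-api-0

    md_observed = datetime(2025, 11, 22, 7, 46, 57, 133778, tzinfo=timezone.utc)    # 07:46:57.133778863
    print((md_observed - created).total_seconds())     # 7.133778 ~ podStartSLOduration for nova-metadata-0
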
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.307644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" (UID: "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.314000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config" (OuterVolumeSpecName: "config") pod "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" (UID: "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.322098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" (UID: "f3c6eac3-44d6-4a48-a1b7-71e98be7f70e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.344152 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfqfw\" (UniqueName: \"kubernetes.io/projected/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-kube-api-access-vfqfw\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.344195 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.344211 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.344222 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.344236 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.393703 4858 generic.go:334] "Generic (PLEG): container finished" podID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerID="c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600" exitCode=0 Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.393760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerDied","Data":"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600"} Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.393791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbb9bb55b-9l7r4" 
event={"ID":"f3c6eac3-44d6-4a48-a1b7-71e98be7f70e","Type":"ContainerDied","Data":"e6c9257233dd531f85cf86c9cd05ffcf681d9dfc9deaf1d1516d07f0fc1e1cd7"} Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.393809 4858 scope.go:117] "RemoveContainer" containerID="17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.393806 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbb9bb55b-9l7r4" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.418292 4858 scope.go:117] "RemoveContainer" containerID="c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.438000 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.446960 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cbb9bb55b-9l7r4"] Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.504692 4858 scope.go:117] "RemoveContainer" containerID="17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb" Nov 22 07:46:57 crc kubenswrapper[4858]: E1122 07:46:57.505591 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb\": container with ID starting with 17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb not found: ID does not exist" containerID="17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.505651 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb"} err="failed to get container status \"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb\": rpc error: code = NotFound desc = could not find container \"17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb\": container with ID starting with 17c876e51fa2d21880c2e49e0d2cc1e56a23475aa12cb0c75c4c0f76acadaffb not found: ID does not exist" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.505687 4858 scope.go:117] "RemoveContainer" containerID="c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600" Nov 22 07:46:57 crc kubenswrapper[4858]: E1122 07:46:57.506216 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600\": container with ID starting with c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600 not found: ID does not exist" containerID="c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.506346 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600"} err="failed to get container status \"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600\": rpc error: code = NotFound desc = could not find container \"c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600\": container with ID starting with c6eaec51cc545caee85a4b702c24f19a141ab1b9626ae8b3304d34dbc231d600 not found: ID does not exist" Nov 22 07:46:57 crc kubenswrapper[4858]: I1122 07:46:57.549299 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" path="/var/lib/kubelet/pods/f3c6eac3-44d6-4a48-a1b7-71e98be7f70e/volumes" Nov 22 07:47:00 crc kubenswrapper[4858]: I1122 07:47:00.745821 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:47:00 crc kubenswrapper[4858]: I1122 07:47:00.746906 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:47:00 crc kubenswrapper[4858]: I1122 07:47:00.784344 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:47:00 crc kubenswrapper[4858]: I1122 07:47:00.784556 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:47:01 crc kubenswrapper[4858]: I1122 07:47:01.828719 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:01 crc kubenswrapper[4858]: I1122 07:47:01.847660 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:01 crc kubenswrapper[4858]: I1122 07:47:01.847780 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:01 crc kubenswrapper[4858]: I1122 07:47:01.847819 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.749545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.750163 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.750619 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.750643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.756868 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.757085 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.789849 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.789950 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.795496 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:47:10 crc kubenswrapper[4858]: I1122 07:47:10.796631 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.129973 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:47:11 crc kubenswrapper[4858]: E1122 07:47:11.130604 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-api" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.130657 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-api" Nov 22 07:47:11 crc kubenswrapper[4858]: E1122 07:47:11.130696 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-httpd" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.130705 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-httpd" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.131026 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-httpd" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.131053 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c6eac3-44d6-4a48-a1b7-71e98be7f70e" containerName="neutron-api" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.133046 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.141918 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.240058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.240165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.240197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.240305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmgh\" (UniqueName: \"kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.240761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.241069 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345241 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmgh\" (UniqueName: \"kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.345352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.346339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.350085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.350991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.351227 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.355191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.402268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmgh\" (UniqueName: 
\"kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh\") pod \"dnsmasq-dns-5d7f54fb65-kvhvp\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.470152 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:11 crc kubenswrapper[4858]: I1122 07:47:11.827226 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:47:12 crc kubenswrapper[4858]: I1122 07:47:12.503494 4858 generic.go:334] "Generic (PLEG): container finished" podID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerID="59af042d3acb4898c5619edff95f385338616bbd0b3ce774e93c98a8b4ce51c3" exitCode=0 Nov 22 07:47:12 crc kubenswrapper[4858]: I1122 07:47:12.503777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" event={"ID":"56ada962-6646-4da6-987d-6e9e277ee8b2","Type":"ContainerDied","Data":"59af042d3acb4898c5619edff95f385338616bbd0b3ce774e93c98a8b4ce51c3"} Nov 22 07:47:12 crc kubenswrapper[4858]: I1122 07:47:12.505100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" event={"ID":"56ada962-6646-4da6-987d-6e9e277ee8b2","Type":"ContainerStarted","Data":"cb8d8ac8061b42f488c86c16d0d86adb60ca6c2398fda29bfcf9285579ed00f7"} Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.520103 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" event={"ID":"56ada962-6646-4da6-987d-6e9e277ee8b2","Type":"ContainerStarted","Data":"d4513547d8a2ac717f8f1030a117cfe4b3acd8b155fe44df16b760b78b855132"} Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.520687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.543061 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" podStartSLOduration=2.54303249 podStartE2EDuration="2.54303249s" podCreationTimestamp="2025-11-22 07:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:13.540472257 +0000 UTC m=+2195.381895293" watchObservedRunningTime="2025-11-22 07:47:13.54303249 +0000 UTC m=+2195.384455496" Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.892965 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.893307 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-central-agent" containerID="cri-o://4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" gracePeriod=30 Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.893465 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-notification-agent" containerID="cri-o://2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" gracePeriod=30 Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.893494 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="sg-core" containerID="cri-o://1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" gracePeriod=30 Nov 22 07:47:13 crc kubenswrapper[4858]: I1122 07:47:13.893520 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="proxy-httpd" containerID="cri-o://d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" gracePeriod=30 Nov 22 07:47:14 crc kubenswrapper[4858]: I1122 07:47:14.003386 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": read tcp 10.217.0.2:36056->10.217.0.184:3000: read: connection reset by peer" Nov 22 07:47:14 crc kubenswrapper[4858]: I1122 07:47:14.534152 4858 generic.go:334] "Generic (PLEG): container finished" podID="baf87756-8721-4a37-a84d-4990d3a41a35" containerID="d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" exitCode=0 Nov 22 07:47:14 crc kubenswrapper[4858]: I1122 07:47:14.534496 4858 generic.go:334] "Generic (PLEG): container finished" podID="baf87756-8721-4a37-a84d-4990d3a41a35" containerID="1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" exitCode=2 Nov 22 07:47:14 crc kubenswrapper[4858]: I1122 07:47:14.534343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerDied","Data":"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91"} Nov 22 07:47:14 crc kubenswrapper[4858]: I1122 07:47:14.534586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerDied","Data":"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da"} Nov 22 07:47:14 crc kubenswrapper[4858]: E1122 07:47:14.642977 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf87756_8721_4a37_a84d_4990d3a41a35.slice/crio-4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.130671 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.131215 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-log" containerID="cri-o://bc8a27eea352263f98fe25e76c1d957eeeb184f506f3aa6daf2e74580fb96d1b" gracePeriod=30 Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.131819 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-api" containerID="cri-o://0fa5b0799c1ab37edcacaa7761400cfbd72a9c9bff76e4ad8ef5cba123f45786" gracePeriod=30 Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.525298 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.551558 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerID="bc8a27eea352263f98fe25e76c1d957eeeb184f506f3aa6daf2e74580fb96d1b" exitCode=143 Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.575082 4858 generic.go:334] "Generic (PLEG): container finished" podID="baf87756-8721-4a37-a84d-4990d3a41a35" containerID="2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" exitCode=0 Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.575136 4858 generic.go:334] "Generic (PLEG): container finished" podID="baf87756-8721-4a37-a84d-4990d3a41a35" containerID="4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" exitCode=0 Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.575227 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.585755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerDied","Data":"bc8a27eea352263f98fe25e76c1d957eeeb184f506f3aa6daf2e74580fb96d1b"} Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.585824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerDied","Data":"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9"} Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.585867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerDied","Data":"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154"} Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.585884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf87756-8721-4a37-a84d-4990d3a41a35","Type":"ContainerDied","Data":"550c1b025f6a91d9f1eb90e387d55352b53fc4e2d4aa355371fdd228373dac57"} Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.585906 4858 scope.go:117] "RemoveContainer" containerID="d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.618511 4858 scope.go:117] "RemoveContainer" containerID="1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.647120 4858 scope.go:117] "RemoveContainer" containerID="2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.669937 4858 scope.go:117] "RemoveContainer" containerID="4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.674445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: 
\"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmvjq\" (UniqueName: \"kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675733 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675813 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675838 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.675866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd\") pod \"baf87756-8721-4a37-a84d-4990d3a41a35\" (UID: \"baf87756-8721-4a37-a84d-4990d3a41a35\") " Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.677797 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.678175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.682597 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts" (OuterVolumeSpecName: "scripts") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.684584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq" (OuterVolumeSpecName: "kube-api-access-rmvjq") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "kube-api-access-rmvjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.703723 4858 scope.go:117] "RemoveContainer" containerID="d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.704585 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91\": container with ID starting with d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91 not found: ID does not exist" containerID="d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.704650 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91"} err="failed to get container status \"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91\": rpc error: code = NotFound desc = could not find container \"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91\": container with ID starting with d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.704681 4858 scope.go:117] "RemoveContainer" containerID="1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.705423 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da\": container with ID starting with 1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da not found: ID does not exist" containerID="1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.705465 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da"} err="failed to get container status \"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da\": rpc error: code = NotFound desc = could not find container \"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da\": container with ID starting with 1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.705495 4858 scope.go:117] "RemoveContainer" containerID="2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.706076 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9\": container with ID starting with 2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9 not found: ID does not exist" containerID="2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706095 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9"} err="failed to get container status \"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9\": rpc error: code = NotFound desc = could not 
find container \"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9\": container with ID starting with 2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706119 4858 scope.go:117] "RemoveContainer" containerID="4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.706573 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154\": container with ID starting with 4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154 not found: ID does not exist" containerID="4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706634 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154"} err="failed to get container status \"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154\": rpc error: code = NotFound desc = could not find container \"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154\": container with ID starting with 4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706648 4858 scope.go:117] "RemoveContainer" containerID="d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706912 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91"} err="failed to get container status \"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91\": rpc error: code = NotFound desc = could not find container \"d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91\": container with ID starting with d57a64add5295e8f2280fec6500d2014e6bc8db3465d1456c306752af5e72d91 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.706928 4858 scope.go:117] "RemoveContainer" containerID="1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.707218 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da"} err="failed to get container status \"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da\": rpc error: code = NotFound desc = could not find container \"1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da\": container with ID starting with 1efe192f0a474a6f7e01f1257464443bfa593c7309aed98a340a3f8e1d5ac3da not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.707247 4858 scope.go:117] "RemoveContainer" containerID="2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.707883 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9"} err="failed to get container status \"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9\": rpc error: code = NotFound desc = could not 
find container \"2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9\": container with ID starting with 2e8994066e4299903c4f7f96d668ff18d3c2bddbcd205b7d302de513875b1ec9 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.707983 4858 scope.go:117] "RemoveContainer" containerID="4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.708418 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154"} err="failed to get container status \"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154\": rpc error: code = NotFound desc = could not find container \"4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154\": container with ID starting with 4cab96981486aba33f22ec11983875599d6e5791c8136512df94baebb5ce9154 not found: ID does not exist" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.712807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.773730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.777971 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.778014 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.778025 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf87756-8721-4a37-a84d-4990d3a41a35-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.778036 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.778047 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.778058 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmvjq\" (UniqueName: \"kubernetes.io/projected/baf87756-8721-4a37-a84d-4990d3a41a35-kube-api-access-rmvjq\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.788635 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data" (OuterVolumeSpecName: "config-data") pod "baf87756-8721-4a37-a84d-4990d3a41a35" (UID: "baf87756-8721-4a37-a84d-4990d3a41a35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.880058 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf87756-8721-4a37-a84d-4990d3a41a35-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.916143 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.928491 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.942917 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.943528 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-notification-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943550 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-notification-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.943568 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-central-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943577 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-central-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.943591 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="sg-core" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943598 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="sg-core" Nov 22 07:47:15 crc kubenswrapper[4858]: E1122 07:47:15.943615 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="proxy-httpd" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943622 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="proxy-httpd" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943823 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-notification-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943839 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="ceilometer-central-agent" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943849 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="sg-core" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.943860 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" containerName="proxy-httpd" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.945783 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.948466 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.949152 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:47:15 crc kubenswrapper[4858]: I1122 07:47:15.962754 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083590 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sxj\" (UniqueName: \"kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083721 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.083781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185820 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9sxj\" (UniqueName: \"kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.185978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.187464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.187565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.192137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.194025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.194804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.205129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.219351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9sxj\" (UniqueName: \"kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj\") pod \"ceilometer-0\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.266133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.538926 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:16 crc kubenswrapper[4858]: I1122 07:47:16.774504 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:16 crc kubenswrapper[4858]: W1122 07:47:16.782005 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7e9c4e3_7783_4338_b0c6_fe9adfff04a8.slice/crio-1327329a5d9cb2680b17d972abd5c5a50aed89283b3f57779e1d9b5ce07b3f46 WatchSource:0}: Error finding container 1327329a5d9cb2680b17d972abd5c5a50aed89283b3f57779e1d9b5ce07b3f46: Status 404 returned error can't find the container with id 1327329a5d9cb2680b17d972abd5c5a50aed89283b3f57779e1d9b5ce07b3f46 Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.551195 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf87756-8721-4a37-a84d-4990d3a41a35" path="/var/lib/kubelet/pods/baf87756-8721-4a37-a84d-4990d3a41a35/volumes" Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.599140 4858 generic.go:334] "Generic (PLEG): container finished" podID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" containerID="2ffc1c6b78623d0fadfb08bceeb8b1b0fe581cc02d6f6632a9ea262c0c6faa8e" exitCode=137 Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.599235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4","Type":"ContainerDied","Data":"2ffc1c6b78623d0fadfb08bceeb8b1b0fe581cc02d6f6632a9ea262c0c6faa8e"} Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.601360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerStarted","Data":"1327329a5d9cb2680b17d972abd5c5a50aed89283b3f57779e1d9b5ce07b3f46"} Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.835882 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.924011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data\") pod \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.924260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle\") pod \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.924362 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5wd\" (UniqueName: \"kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd\") pod \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\" (UID: \"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4\") " Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.929432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd" (OuterVolumeSpecName: "kube-api-access-xh5wd") pod "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" (UID: "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4"). InnerVolumeSpecName "kube-api-access-xh5wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.963485 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" (UID: "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:17 crc kubenswrapper[4858]: I1122 07:47:17.968439 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data" (OuterVolumeSpecName: "config-data") pod "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" (UID: "7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.026635 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5wd\" (UniqueName: \"kubernetes.io/projected/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-kube-api-access-xh5wd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.026686 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.026696 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.618512 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerID="0fa5b0799c1ab37edcacaa7761400cfbd72a9c9bff76e4ad8ef5cba123f45786" exitCode=0 Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.618619 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerDied","Data":"0fa5b0799c1ab37edcacaa7761400cfbd72a9c9bff76e4ad8ef5cba123f45786"} Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.622578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4","Type":"ContainerDied","Data":"b3d4c706c1be6eacd697ff96df4e94108967553de643c8e40a430f96c4ad931a"} Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.622676 4858 scope.go:117] "RemoveContainer" containerID="2ffc1c6b78623d0fadfb08bceeb8b1b0fe581cc02d6f6632a9ea262c0c6faa8e" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.622680 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.625550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerStarted","Data":"fc77b5533228813cbdf68f074d632723dfb4e0dc7c67db359ae1142e977c2c8c"} Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.672930 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.702660 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.715441 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:47:18 crc kubenswrapper[4858]: E1122 07:47:18.716152 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.716183 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.716621 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.717681 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.722012 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.722401 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.722687 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.727491 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.804202 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.857418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.857602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7bd\" (UniqueName: \"kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.857673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.857729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.858169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.959447 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs\") pod \"d0e2a207-d641-40d4-93a4-73bbacf1034f\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.959591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xv5q\" (UniqueName: \"kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q\") pod \"d0e2a207-d641-40d4-93a4-73bbacf1034f\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.959783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data\") pod \"d0e2a207-d641-40d4-93a4-73bbacf1034f\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.959821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle\") pod \"d0e2a207-d641-40d4-93a4-73bbacf1034f\" (UID: \"d0e2a207-d641-40d4-93a4-73bbacf1034f\") " Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.960246 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.960373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7bd\" (UniqueName: \"kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.960453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.960508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.960579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.962835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs" (OuterVolumeSpecName: "logs") pod "d0e2a207-d641-40d4-93a4-73bbacf1034f" (UID: "d0e2a207-d641-40d4-93a4-73bbacf1034f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.975780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q" (OuterVolumeSpecName: "kube-api-access-5xv5q") pod "d0e2a207-d641-40d4-93a4-73bbacf1034f" (UID: "d0e2a207-d641-40d4-93a4-73bbacf1034f"). InnerVolumeSpecName "kube-api-access-5xv5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.976280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.976357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.992830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:18 crc kubenswrapper[4858]: I1122 07:47:18.997122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7bd\" (UniqueName: \"kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.012967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.020559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data" (OuterVolumeSpecName: "config-data") pod "d0e2a207-d641-40d4-93a4-73bbacf1034f" (UID: "d0e2a207-d641-40d4-93a4-73bbacf1034f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.041637 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0e2a207-d641-40d4-93a4-73bbacf1034f" (UID: "d0e2a207-d641-40d4-93a4-73bbacf1034f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.063585 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0e2a207-d641-40d4-93a4-73bbacf1034f-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.063630 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xv5q\" (UniqueName: \"kubernetes.io/projected/d0e2a207-d641-40d4-93a4-73bbacf1034f-kube-api-access-5xv5q\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.063643 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.063652 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2a207-d641-40d4-93a4-73bbacf1034f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.132626 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.571917 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4" path="/var/lib/kubelet/pods/7ed79c6d-f9ec-4586-ac3c-8c2911c8deb4/volumes" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.654811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0e2a207-d641-40d4-93a4-73bbacf1034f","Type":"ContainerDied","Data":"796a6c9336b19b2f5cb038c450a4bcbd027c1073804e7268dadfb98813321432"} Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.654871 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.654881 4858 scope.go:117] "RemoveContainer" containerID="0fa5b0799c1ab37edcacaa7761400cfbd72a9c9bff76e4ad8ef5cba123f45786" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.659678 4858 generic.go:334] "Generic (PLEG): container finished" podID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" containerID="847c3a6df4435d370e874a25826de76c5737e8e8b4a6447f6caf294fcf2286a4" exitCode=137 Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.659732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9eeefb0-28fa-4025-b31c-dd009f3921e1","Type":"ContainerDied","Data":"847c3a6df4435d370e874a25826de76c5737e8e8b4a6447f6caf294fcf2286a4"} Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.662760 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.734275 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.739307 4858 scope.go:117] "RemoveContainer" containerID="bc8a27eea352263f98fe25e76c1d957eeeb184f506f3aa6daf2e74580fb96d1b" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.770026 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.790169 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800099 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:19 crc kubenswrapper[4858]: E1122 07:47:19.800646 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-log" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800672 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-log" Nov 22 07:47:19 crc kubenswrapper[4858]: E1122 07:47:19.800693 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-api" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800700 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-api" Nov 22 07:47:19 crc kubenswrapper[4858]: E1122 07:47:19.800727 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" containerName="nova-scheduler-scheduler" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800735 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" containerName="nova-scheduler-scheduler" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800908 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-api" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800921 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" containerName="nova-api-log" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.800944 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" containerName="nova-scheduler-scheduler" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.801987 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.806772 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.807156 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.807215 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.835080 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.895462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7n2x\" (UniqueName: \"kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x\") pod \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.895649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data\") pod \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.895725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle\") pod \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\" (UID: \"c9eeefb0-28fa-4025-b31c-dd009f3921e1\") " Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896171 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gcg\" (UniqueName: \"kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg\") pod \"nova-api-0\" (UID: 
\"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.896766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.902845 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x" (OuterVolumeSpecName: "kube-api-access-h7n2x") pod "c9eeefb0-28fa-4025-b31c-dd009f3921e1" (UID: "c9eeefb0-28fa-4025-b31c-dd009f3921e1"). InnerVolumeSpecName "kube-api-access-h7n2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.925163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data" (OuterVolumeSpecName: "config-data") pod "c9eeefb0-28fa-4025-b31c-dd009f3921e1" (UID: "c9eeefb0-28fa-4025-b31c-dd009f3921e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.925704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9eeefb0-28fa-4025-b31c-dd009f3921e1" (UID: "c9eeefb0-28fa-4025-b31c-dd009f3921e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gcg\" (UniqueName: \"kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 
07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998542 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998559 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9eeefb0-28fa-4025-b31c-dd009f3921e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:19 crc kubenswrapper[4858]: I1122 07:47:19.998572 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7n2x\" (UniqueName: \"kubernetes.io/projected/c9eeefb0-28fa-4025-b31c-dd009f3921e1-kube-api-access-h7n2x\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:19.999771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.003706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.003719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.004012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.008088 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.022102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gcg\" (UniqueName: \"kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg\") pod \"nova-api-0\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.138308 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.567986 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.676718 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.676796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9eeefb0-28fa-4025-b31c-dd009f3921e1","Type":"ContainerDied","Data":"97c80a4c4a84bcb12c0765629e61d03ea918dab22c9d286526dea8c20230022d"} Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.676865 4858 scope.go:117] "RemoveContainer" containerID="847c3a6df4435d370e874a25826de76c5737e8e8b4a6447f6caf294fcf2286a4" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.686841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerStarted","Data":"a798395ba6fb24efabd3ff008885a40eab0b416ce61db5470f0675f9224a52ee"} Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.703600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerStarted","Data":"77a8e04b7e81203285a77022d37d7a51c8dd64d5733e40c07d002eb29ba1b466"} Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.707088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d","Type":"ContainerStarted","Data":"6a71d1997501103b990d72ab680b1b604e4246555f70c6fd556826a4f81b697b"} Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.707142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d","Type":"ContainerStarted","Data":"441f63fc8e2358b6d2aee749b466d904915b1ae56f86d2c07b2deb13ab980ee1"} Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.746704 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.746669782 podStartE2EDuration="2.746669782s" podCreationTimestamp="2025-11-22 07:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:20.738332403 +0000 UTC m=+2202.579755429" watchObservedRunningTime="2025-11-22 07:47:20.746669782 +0000 UTC m=+2202.588092788" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.778705 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.813656 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.847192 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.850424 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.856442 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:47:20 crc kubenswrapper[4858]: I1122 07:47:20.870945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.037771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.037831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.038030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpk7\" (UniqueName: \"kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.141798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkpk7\" (UniqueName: \"kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.141902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.141935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.154246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.157167 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.186089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkpk7\" (UniqueName: 
\"kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7\") pod \"nova-scheduler-0\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.471504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.478209 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.578270 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9eeefb0-28fa-4025-b31c-dd009f3921e1" path="/var/lib/kubelet/pods/c9eeefb0-28fa-4025-b31c-dd009f3921e1/volumes" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.581066 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e2a207-d641-40d4-93a4-73bbacf1034f" path="/var/lib/kubelet/pods/d0e2a207-d641-40d4-93a4-73bbacf1034f/volumes" Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.607402 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.607783 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="dnsmasq-dns" containerID="cri-o://5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5" gracePeriod=10 Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.783843 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerStarted","Data":"dbd0e9bf232c8abbffd8d8b0cd99a0dd9d62d06a778533ea9599f74d7b16ff41"} Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.791040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerStarted","Data":"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812"} Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.791135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerStarted","Data":"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4"} Nov 22 07:47:21 crc kubenswrapper[4858]: I1122 07:47:21.822042 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.822007112 podStartE2EDuration="2.822007112s" podCreationTimestamp="2025-11-22 07:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:21.815590785 +0000 UTC m=+2203.657013791" watchObservedRunningTime="2025-11-22 07:47:21.822007112 +0000 UTC m=+2203.663430118" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.126572 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.444525 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485389 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485412 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fw7\" (UniqueName: \"kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485534 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.485579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc\") pod \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\" (UID: \"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1\") " Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.499403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7" (OuterVolumeSpecName: "kube-api-access-p5fw7") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "kube-api-access-p5fw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.600124 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fw7\" (UniqueName: \"kubernetes.io/projected/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-kube-api-access-p5fw7\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.654411 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.654579 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.654665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.658064 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.658365 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config" (OuterVolumeSpecName: "config") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: E1122 07:47:22.658572 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="init" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.658591 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="init" Nov 22 07:47:22 crc kubenswrapper[4858]: E1122 07:47:22.658638 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="dnsmasq-dns" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.658647 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="dnsmasq-dns" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.658842 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerName="dnsmasq-dns" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.660505 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.695443 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6cg\" (UniqueName: \"kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702726 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702800 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702854 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.702869 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.723996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" (UID: "ba125f1b-20c4-4be0-a3e5-ed202f9f40f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.805226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.805479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6cg\" (UniqueName: \"kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.805671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.806044 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.806896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.806997 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.807206 4858 generic.go:334] "Generic (PLEG): container finished" podID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" containerID="5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5" exitCode=0 Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.807350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" event={"ID":"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1","Type":"ContainerDied","Data":"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5"} Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.807387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" event={"ID":"ba125f1b-20c4-4be0-a3e5-ed202f9f40f1","Type":"ContainerDied","Data":"fef2dc62a43da635cc7f550b7dc92a23423e0bee9acca763c25172d56d4b3aee"} Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.807408 4858 scope.go:117] "RemoveContainer" containerID="5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.807536 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-zvjrj" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.821472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a65146ac-6c59-4ef0-a048-2e705c610e9b","Type":"ContainerStarted","Data":"aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0"} Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.821522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a65146ac-6c59-4ef0-a048-2e705c610e9b","Type":"ContainerStarted","Data":"7d9f92289eeb1e362d090d09e732df545cc29c4c43887f5cbd5aad6f57813e69"} Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.838137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6cg\" (UniqueName: \"kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg\") pod \"community-operators-jrczr\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.859878 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8598509549999998 podStartE2EDuration="2.859850955s" podCreationTimestamp="2025-11-22 07:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:22.851976441 +0000 UTC m=+2204.693399457" watchObservedRunningTime="2025-11-22 07:47:22.859850955 +0000 UTC m=+2204.701273961" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.872392 4858 scope.go:117] "RemoveContainer" containerID="2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.910573 4858 scope.go:117] "RemoveContainer" containerID="5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5" Nov 22 07:47:22 crc kubenswrapper[4858]: E1122 07:47:22.911291 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5\": container with ID starting with 5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5 not found: ID does not exist" containerID="5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.911344 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5"} err="failed to get container status \"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5\": rpc error: code = NotFound desc = could not find container \"5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5\": container with ID starting with 5fee7610c86c9218292ef9c23290ce938daa8738a8a336ed741076c29b9f49d5 not found: ID does not exist" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.911375 4858 scope.go:117] "RemoveContainer" containerID="2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85" Nov 22 07:47:22 crc kubenswrapper[4858]: E1122 07:47:22.913385 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85\": container with ID starting with 
2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85 not found: ID does not exist" containerID="2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.913431 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85"} err="failed to get container status \"2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85\": rpc error: code = NotFound desc = could not find container \"2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85\": container with ID starting with 2aac5cb0f4d4c3f095d5db2ddc4bf5c0f588709b8a5f665c309bad31c886fc85 not found: ID does not exist" Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.917703 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.933683 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-zvjrj"] Nov 22 07:47:22 crc kubenswrapper[4858]: I1122 07:47:22.986958 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.554027 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba125f1b-20c4-4be0-a3e5-ed202f9f40f1" path="/var/lib/kubelet/pods/ba125f1b-20c4-4be0-a3e5-ed202f9f40f1/volumes" Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.634299 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:23 crc kubenswrapper[4858]: W1122 07:47:23.641604 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd76f751f_087e_48d1_9f4c_3fe1e386edd8.slice/crio-6a9174c9922d27f271540e267aa5c62dc93cc28a024a97cd2af032f24cd62b69 WatchSource:0}: Error finding container 6a9174c9922d27f271540e267aa5c62dc93cc28a024a97cd2af032f24cd62b69: Status 404 returned error can't find the container with id 6a9174c9922d27f271540e267aa5c62dc93cc28a024a97cd2af032f24cd62b69 Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.833497 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerStarted","Data":"d9622634bc1174d49c5027b2be40e53bcfb960ed5f73042e382858cbc45fd0e6"} Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.833632 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-central-agent" containerID="cri-o://fc77b5533228813cbdf68f074d632723dfb4e0dc7c67db359ae1142e977c2c8c" gracePeriod=30 Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.833655 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="proxy-httpd" containerID="cri-o://d9622634bc1174d49c5027b2be40e53bcfb960ed5f73042e382858cbc45fd0e6" gracePeriod=30 Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.833729 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="sg-core" 
containerID="cri-o://dbd0e9bf232c8abbffd8d8b0cd99a0dd9d62d06a778533ea9599f74d7b16ff41" gracePeriod=30 Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.833770 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-notification-agent" containerID="cri-o://77a8e04b7e81203285a77022d37d7a51c8dd64d5733e40c07d002eb29ba1b466" gracePeriod=30 Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.834073 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:47:23 crc kubenswrapper[4858]: I1122 07:47:23.841409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerStarted","Data":"6a9174c9922d27f271540e267aa5c62dc93cc28a024a97cd2af032f24cd62b69"} Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.133893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860342 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerID="d9622634bc1174d49c5027b2be40e53bcfb960ed5f73042e382858cbc45fd0e6" exitCode=0 Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860642 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerID="dbd0e9bf232c8abbffd8d8b0cd99a0dd9d62d06a778533ea9599f74d7b16ff41" exitCode=2 Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860653 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerID="77a8e04b7e81203285a77022d37d7a51c8dd64d5733e40c07d002eb29ba1b466" exitCode=0 Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerDied","Data":"d9622634bc1174d49c5027b2be40e53bcfb960ed5f73042e382858cbc45fd0e6"} Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerDied","Data":"dbd0e9bf232c8abbffd8d8b0cd99a0dd9d62d06a778533ea9599f74d7b16ff41"} Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.860766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerDied","Data":"77a8e04b7e81203285a77022d37d7a51c8dd64d5733e40c07d002eb29ba1b466"} Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.863344 4858 generic.go:334] "Generic (PLEG): container finished" podID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerID="2b0bc1d136d024ef0736db554a6dae961aec76f511890c1ed3515ad49fc86a55" exitCode=0 Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.863375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerDied","Data":"2b0bc1d136d024ef0736db554a6dae961aec76f511890c1ed3515ad49fc86a55"} Nov 22 07:47:24 crc kubenswrapper[4858]: I1122 07:47:24.890012 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.881674754 
podStartE2EDuration="9.889978006s" podCreationTimestamp="2025-11-22 07:47:15 +0000 UTC" firstStartedPulling="2025-11-22 07:47:16.785194098 +0000 UTC m=+2198.626617114" lastFinishedPulling="2025-11-22 07:47:22.79349736 +0000 UTC m=+2204.634920366" observedRunningTime="2025-11-22 07:47:23.874638437 +0000 UTC m=+2205.716061463" watchObservedRunningTime="2025-11-22 07:47:24.889978006 +0000 UTC m=+2206.731401012" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.004265 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.006488 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.018871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.062622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.063160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.063286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqft\" (UniqueName: \"kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.164815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.164978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwqft\" (UniqueName: \"kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.165030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.165451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.165467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.189346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwqft\" (UniqueName: \"kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft\") pod \"redhat-operators-8slpj\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.198365 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.201433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.229816 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.266139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.266233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgct\" (UniqueName: \"kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.266265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.334572 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.367939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgct\" (UniqueName: \"kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.368656 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.368962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.369239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.369574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.394777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgct\" (UniqueName: \"kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct\") pod \"certified-operators-xmnm5\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.572198 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:25 crc kubenswrapper[4858]: I1122 07:47:25.960922 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.365251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:26 crc kubenswrapper[4858]: W1122 07:47:26.368353 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0d75a52_4ffd_49af_8567_0bdaa84d00f4.slice/crio-ce34ec8cba57d4426ee77405455a5a69ba32a3c8cdcfa255e8d17aa7c8c34d48 WatchSource:0}: Error finding container ce34ec8cba57d4426ee77405455a5a69ba32a3c8cdcfa255e8d17aa7c8c34d48: Status 404 returned error can't find the container with id ce34ec8cba57d4426ee77405455a5a69ba32a3c8cdcfa255e8d17aa7c8c34d48 Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.479587 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.961088 4858 generic.go:334] "Generic (PLEG): container finished" podID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerID="9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5" exitCode=0 Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.961232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerDied","Data":"9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5"} Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.961488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerStarted","Data":"ce34ec8cba57d4426ee77405455a5a69ba32a3c8cdcfa255e8d17aa7c8c34d48"} Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.972407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerStarted","Data":"1a99373127afabc28b7b252e68b078e1805098a8ed6569c042c75771663138c4"} Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.975674 4858 generic.go:334] "Generic (PLEG): container finished" podID="b100894f-375d-4d4f-9bfa-7c87e4db058d" containerID="e08b7a1f8e2e8f5bdd733d2f70df309fb24c38853d5d408bf801b16aee9f17da" exitCode=0 Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.975792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" event={"ID":"b100894f-375d-4d4f-9bfa-7c87e4db058d","Type":"ContainerDied","Data":"e08b7a1f8e2e8f5bdd733d2f70df309fb24c38853d5d408bf801b16aee9f17da"} Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.981505 4858 generic.go:334] "Generic (PLEG): container finished" podID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerID="e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124" exitCode=0 Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 07:47:26.981565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerDied","Data":"e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124"} Nov 22 07:47:26 crc kubenswrapper[4858]: I1122 
07:47:26.981596 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerStarted","Data":"bf39f8dbbf15429addd259d14d16451f12f3dfe5f50895c51c3e075ef516709a"} Nov 22 07:47:27 crc kubenswrapper[4858]: I1122 07:47:27.512458 4858 scope.go:117] "RemoveContainer" containerID="18b6954890d3d3bafc895e4e66130cfcc62022719c2856ccabe8b729e4c34b20" Nov 22 07:47:27 crc kubenswrapper[4858]: I1122 07:47:27.997366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerStarted","Data":"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343"} Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.001386 4858 generic.go:334] "Generic (PLEG): container finished" podID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerID="1a99373127afabc28b7b252e68b078e1805098a8ed6569c042c75771663138c4" exitCode=0 Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.001494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerDied","Data":"1a99373127afabc28b7b252e68b078e1805098a8ed6569c042c75771663138c4"} Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.006940 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerStarted","Data":"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd"} Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.628094 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.791093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data\") pod \"b100894f-375d-4d4f-9bfa-7c87e4db058d\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.791281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts\") pod \"b100894f-375d-4d4f-9bfa-7c87e4db058d\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.791420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle\") pod \"b100894f-375d-4d4f-9bfa-7c87e4db058d\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.792713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2hls\" (UniqueName: \"kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls\") pod \"b100894f-375d-4d4f-9bfa-7c87e4db058d\" (UID: \"b100894f-375d-4d4f-9bfa-7c87e4db058d\") " Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.799924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls" (OuterVolumeSpecName: "kube-api-access-j2hls") pod "b100894f-375d-4d4f-9bfa-7c87e4db058d" (UID: "b100894f-375d-4d4f-9bfa-7c87e4db058d"). InnerVolumeSpecName "kube-api-access-j2hls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.816526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts" (OuterVolumeSpecName: "scripts") pod "b100894f-375d-4d4f-9bfa-7c87e4db058d" (UID: "b100894f-375d-4d4f-9bfa-7c87e4db058d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.829196 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b100894f-375d-4d4f-9bfa-7c87e4db058d" (UID: "b100894f-375d-4d4f-9bfa-7c87e4db058d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.829543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data" (OuterVolumeSpecName: "config-data") pod "b100894f-375d-4d4f-9bfa-7c87e4db058d" (UID: "b100894f-375d-4d4f-9bfa-7c87e4db058d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.897449 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.897496 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.897510 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b100894f-375d-4d4f-9bfa-7c87e4db058d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:28 crc kubenswrapper[4858]: I1122 07:47:28.897521 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2hls\" (UniqueName: \"kubernetes.io/projected/b100894f-375d-4d4f-9bfa-7c87e4db058d-kube-api-access-j2hls\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.019587 4858 generic.go:334] "Generic (PLEG): container finished" podID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerID="99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd" exitCode=0 Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.019678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerDied","Data":"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd"} Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.023265 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerID="fc77b5533228813cbdf68f074d632723dfb4e0dc7c67db359ae1142e977c2c8c" exitCode=0 Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.023296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerDied","Data":"fc77b5533228813cbdf68f074d632723dfb4e0dc7c67db359ae1142e977c2c8c"} Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.028559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" event={"ID":"b100894f-375d-4d4f-9bfa-7c87e4db058d","Type":"ContainerDied","Data":"af6d3de4e685072287796e93c3dcf30402423f9dd4b9fb96d0d39fa3b7a0699a"} Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.028603 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6d3de4e685072287796e93c3dcf30402423f9dd4b9fb96d0d39fa3b7a0699a" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.028674 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bwlrp" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.031941 4858 generic.go:334] "Generic (PLEG): container finished" podID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerID="bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343" exitCode=0 Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.032072 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerDied","Data":"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343"} Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.134633 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.180294 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:47:29 crc kubenswrapper[4858]: E1122 07:47:29.180805 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b100894f-375d-4d4f-9bfa-7c87e4db058d" containerName="nova-cell1-conductor-db-sync" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.180826 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b100894f-375d-4d4f-9bfa-7c87e4db058d" containerName="nova-cell1-conductor-db-sync" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.181028 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b100894f-375d-4d4f-9bfa-7c87e4db058d" containerName="nova-cell1-conductor-db-sync" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.181766 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.184590 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.214024 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.214115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.215773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97hg\" (UniqueName: \"kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.236188 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.317415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97hg\" (UniqueName: 
\"kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.317541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.317567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.322593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.323004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.338205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97hg\" (UniqueName: \"kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg\") pod \"nova-cell1-conductor-0\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.444209 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.534462 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.908615 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932557 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9sxj\" (UniqueName: \"kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932644 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.932835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.934858 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.941238 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj" (OuterVolumeSpecName: "kube-api-access-b9sxj") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "kube-api-access-b9sxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.945016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts" (OuterVolumeSpecName: "scripts") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:29 crc kubenswrapper[4858]: I1122 07:47:29.979497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.032852 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035177 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd\") pod \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\" (UID: \"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8\") " Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035698 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035725 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035738 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035752 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9sxj\" (UniqueName: \"kubernetes.io/projected/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-kube-api-access-b9sxj\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035763 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.035794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.049080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7e9c4e3-7783-4338-b0c6-fe9adfff04a8","Type":"ContainerDied","Data":"1327329a5d9cb2680b17d972abd5c5a50aed89283b3f57779e1d9b5ce07b3f46"} Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.049139 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.049170 4858 scope.go:117] "RemoveContainer" containerID="d9622634bc1174d49c5027b2be40e53bcfb960ed5f73042e382858cbc45fd0e6" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.076751 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data" (OuterVolumeSpecName: "config-data") pod "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" (UID: "c7e9c4e3-7783-4338-b0c6-fe9adfff04a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.080553 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.138223 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.138271 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.138699 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.138733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.262181 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.386360 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.407779 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.420744 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:30 crc kubenswrapper[4858]: E1122 07:47:30.421405 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="proxy-httpd" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421431 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="proxy-httpd" Nov 22 07:47:30 crc kubenswrapper[4858]: E1122 07:47:30.421477 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-central-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421485 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-central-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: E1122 07:47:30.421496 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-notification-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421502 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-notification-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: 
E1122 07:47:30.421518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="sg-core" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421526 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="sg-core" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421751 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-notification-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421771 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="sg-core" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421789 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="proxy-httpd" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.421812 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" containerName="ceilometer-central-agent" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.424156 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.424846 4858 scope.go:117] "RemoveContainer" containerID="dbd0e9bf232c8abbffd8d8b0cd99a0dd9d62d06a778533ea9599f74d7b16ff41" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.428354 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.429155 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.433488 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444488 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgc2\" (UniqueName: \"kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: W1122 07:47:30.444844 4858 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd464fcfc_b91d_45e8_8c90_18083a632351.slice/crio-133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a WatchSource:0}: Error finding container 133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a: Status 404 returned error can't find the container with id 133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.444971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.445150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.478110 4858 scope.go:117] "RemoveContainer" containerID="77a8e04b7e81203285a77022d37d7a51c8dd64d5733e40c07d002eb29ba1b466" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgc2\" (UniqueName: \"kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc 
kubenswrapper[4858]: I1122 07:47:30.546244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.546764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.547152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.550477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.551567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.551753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.558711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.572363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgc2\" (UniqueName: \"kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2\") pod \"ceilometer-0\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.768577 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:30 crc kubenswrapper[4858]: I1122 07:47:30.851820 4858 scope.go:117] "RemoveContainer" containerID="fc77b5533228813cbdf68f074d632723dfb4e0dc7c67db359ae1142e977c2c8c" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.079281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerStarted","Data":"144436047fa95058cf20fbc666e18cf422755a2e4432d6d52e8fbd462d758ae6"} Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.114497 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrczr" podStartSLOduration=4.257481666 podStartE2EDuration="9.114208154s" podCreationTimestamp="2025-11-22 07:47:22 +0000 UTC" firstStartedPulling="2025-11-22 07:47:24.86587593 +0000 UTC m=+2206.707298936" lastFinishedPulling="2025-11-22 07:47:29.722602418 +0000 UTC m=+2211.564025424" observedRunningTime="2025-11-22 07:47:31.107820419 +0000 UTC m=+2212.949243425" watchObservedRunningTime="2025-11-22 07:47:31.114208154 +0000 UTC m=+2212.955631190" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.119710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d464fcfc-b91d-45e8-8c90-18083a632351","Type":"ContainerStarted","Data":"133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a"} Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.177673 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.177751 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.418676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:31 crc kubenswrapper[4858]: W1122 07:47:31.420081 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ca8988_bad7_40c3_8472_91390b87f8eb.slice/crio-d98f227313c167809e4a166b46efe40bb955c9c23f238525151725683cd1a312 WatchSource:0}: Error finding container d98f227313c167809e4a166b46efe40bb955c9c23f238525151725683cd1a312: Status 404 returned error can't find the container with id d98f227313c167809e4a166b46efe40bb955c9c23f238525151725683cd1a312 Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.479477 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.515676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:47:31 crc kubenswrapper[4858]: I1122 07:47:31.554068 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e9c4e3-7783-4338-b0c6-fe9adfff04a8" path="/var/lib/kubelet/pods/c7e9c4e3-7783-4338-b0c6-fe9adfff04a8/volumes" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 
07:47:32.137811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d464fcfc-b91d-45e8-8c90-18083a632351","Type":"ContainerStarted","Data":"1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442"} Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.138486 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.139388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerStarted","Data":"d98f227313c167809e4a166b46efe40bb955c9c23f238525151725683cd1a312"} Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.142356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerStarted","Data":"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b"} Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.145458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerStarted","Data":"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5"} Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.157194 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.157171022 podStartE2EDuration="3.157171022s" podCreationTimestamp="2025-11-22 07:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:32.153812144 +0000 UTC m=+2213.995235150" watchObservedRunningTime="2025-11-22 07:47:32.157171022 +0000 UTC m=+2213.998594028" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.183874 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8slpj" podStartSLOduration=4.301960368 podStartE2EDuration="8.18383813s" podCreationTimestamp="2025-11-22 07:47:24 +0000 UTC" firstStartedPulling="2025-11-22 07:47:26.98361518 +0000 UTC m=+2208.825038186" lastFinishedPulling="2025-11-22 07:47:30.865492942 +0000 UTC m=+2212.706915948" observedRunningTime="2025-11-22 07:47:32.17574374 +0000 UTC m=+2214.017166766" watchObservedRunningTime="2025-11-22 07:47:32.18383813 +0000 UTC m=+2214.025261136" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.197850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.202389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmnm5" podStartSLOduration=3.313454498 podStartE2EDuration="7.202362646s" podCreationTimestamp="2025-11-22 07:47:25 +0000 UTC" firstStartedPulling="2025-11-22 07:47:26.962913464 +0000 UTC m=+2208.804336480" lastFinishedPulling="2025-11-22 07:47:30.851821622 +0000 UTC m=+2212.693244628" observedRunningTime="2025-11-22 07:47:32.194787013 +0000 UTC m=+2214.036210019" watchObservedRunningTime="2025-11-22 07:47:32.202362646 +0000 UTC m=+2214.043785652" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.987008 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:32 crc kubenswrapper[4858]: I1122 07:47:32.987182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:34 crc kubenswrapper[4858]: I1122 07:47:34.037802 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jrczr" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:34 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:34 crc kubenswrapper[4858]: > Nov 22 07:47:35 crc kubenswrapper[4858]: I1122 07:47:35.181032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerStarted","Data":"719ce04f259bde67ca27de4b703d29d13f81cd5271678321815500ca1f454c2f"} Nov 22 07:47:35 crc kubenswrapper[4858]: I1122 07:47:35.335259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:35 crc kubenswrapper[4858]: I1122 07:47:35.335371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:47:35 crc kubenswrapper[4858]: I1122 07:47:35.573360 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:35 crc kubenswrapper[4858]: I1122 07:47:35.573691 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:36 crc kubenswrapper[4858]: I1122 07:47:36.214776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerStarted","Data":"957d69e53083464efbeaab072d7b960ee8b0643c6da7e4e2c055c636d912564d"} Nov 22 07:47:36 crc kubenswrapper[4858]: I1122 07:47:36.433273 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:36 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:36 crc kubenswrapper[4858]: > Nov 22 07:47:36 crc kubenswrapper[4858]: I1122 07:47:36.622530 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xmnm5" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:36 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:36 crc kubenswrapper[4858]: > Nov 22 07:47:37 crc kubenswrapper[4858]: I1122 07:47:37.227899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerStarted","Data":"d2c31b258cb073077717bdd0a5b6d24e092ef086f0ada42e9ec5f2dc118920cc"} Nov 22 07:47:38 crc kubenswrapper[4858]: I1122 07:47:38.242249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerStarted","Data":"4e96784f929d07be90ca54aa2680c48a1f8e47885717f2387f507e471e2f7bd0"} Nov 22 07:47:38 crc kubenswrapper[4858]: I1122 07:47:38.242989 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:47:38 crc kubenswrapper[4858]: I1122 07:47:38.278201 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.050650774 podStartE2EDuration="8.278171199s" podCreationTimestamp="2025-11-22 07:47:30 +0000 UTC" firstStartedPulling="2025-11-22 07:47:31.423787505 +0000 UTC m=+2213.265210511" lastFinishedPulling="2025-11-22 07:47:37.65130793 +0000 UTC m=+2219.492730936" observedRunningTime="2025-11-22 07:47:38.264730497 +0000 UTC m=+2220.106153503" watchObservedRunningTime="2025-11-22 07:47:38.278171199 +0000 UTC m=+2220.119594205" Nov 22 07:47:39 crc kubenswrapper[4858]: I1122 07:47:39.598784 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.156595 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gwqpg"] Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.158952 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.164789 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.165153 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.172514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gwqpg"] Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.173609 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.174614 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.191226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.191723 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.191867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.192012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwvb\" (UniqueName: \"kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb\") pod 
\"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.214487 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.294246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.294312 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.294452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.294565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwvb\" (UniqueName: \"kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.305734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.309356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.311411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.324830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwvb\" (UniqueName: \"kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb\") pod \"nova-cell1-cell-mapping-gwqpg\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.387626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 
07:47:40 crc kubenswrapper[4858]: I1122 07:47:40.494003 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:41 crc kubenswrapper[4858]: I1122 07:47:41.125130 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gwqpg"] Nov 22 07:47:41 crc kubenswrapper[4858]: W1122 07:47:41.127339 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae6cc38_66f8_4e5a_9ce2_62a23de04553.slice/crio-2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2 WatchSource:0}: Error finding container 2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2: Status 404 returned error can't find the container with id 2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2 Nov 22 07:47:41 crc kubenswrapper[4858]: I1122 07:47:41.289121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gwqpg" event={"ID":"cae6cc38-66f8-4e5a-9ce2-62a23de04553","Type":"ContainerStarted","Data":"2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2"} Nov 22 07:47:41 crc kubenswrapper[4858]: I1122 07:47:41.290106 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:47:41 crc kubenswrapper[4858]: I1122 07:47:41.298997 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:47:42 crc kubenswrapper[4858]: I1122 07:47:42.314482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gwqpg" event={"ID":"cae6cc38-66f8-4e5a-9ce2-62a23de04553","Type":"ContainerStarted","Data":"85a50d90c74f2b5f201ff660a1988de49cf64fafadb3d13bf6855ce5e7b51da3"} Nov 22 07:47:42 crc kubenswrapper[4858]: I1122 07:47:42.340052 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gwqpg" podStartSLOduration=2.3400240119999998 podStartE2EDuration="2.340024012s" podCreationTimestamp="2025-11-22 07:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:42.33339704 +0000 UTC m=+2224.174820056" watchObservedRunningTime="2025-11-22 07:47:42.340024012 +0000 UTC m=+2224.181447018" Nov 22 07:47:44 crc kubenswrapper[4858]: I1122 07:47:44.043143 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jrczr" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:44 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:44 crc kubenswrapper[4858]: > Nov 22 07:47:45 crc kubenswrapper[4858]: I1122 07:47:45.627365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:45 crc kubenswrapper[4858]: I1122 07:47:45.679215 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:45 crc kubenswrapper[4858]: I1122 07:47:45.887513 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:46 crc kubenswrapper[4858]: I1122 07:47:46.397252 4858 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:46 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:46 crc kubenswrapper[4858]: > Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.369028 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmnm5" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="registry-server" containerID="cri-o://6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b" gracePeriod=2 Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.887087 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.974546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content\") pod \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.974780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tgct\" (UniqueName: \"kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct\") pod \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.974833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities\") pod \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\" (UID: \"c0d75a52-4ffd-49af-8567-0bdaa84d00f4\") " Nov 22 07:47:47 crc kubenswrapper[4858]: I1122 07:47:47.983207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities" (OuterVolumeSpecName: "utilities") pod "c0d75a52-4ffd-49af-8567-0bdaa84d00f4" (UID: "c0d75a52-4ffd-49af-8567-0bdaa84d00f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.005620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct" (OuterVolumeSpecName: "kube-api-access-2tgct") pod "c0d75a52-4ffd-49af-8567-0bdaa84d00f4" (UID: "c0d75a52-4ffd-49af-8567-0bdaa84d00f4"). InnerVolumeSpecName "kube-api-access-2tgct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.035051 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0d75a52-4ffd-49af-8567-0bdaa84d00f4" (UID: "c0d75a52-4ffd-49af-8567-0bdaa84d00f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.078142 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tgct\" (UniqueName: \"kubernetes.io/projected/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-kube-api-access-2tgct\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.078226 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.078243 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d75a52-4ffd-49af-8567-0bdaa84d00f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.384831 4858 generic.go:334] "Generic (PLEG): container finished" podID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerID="6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b" exitCode=0 Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.384917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerDied","Data":"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b"} Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.384978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmnm5" event={"ID":"c0d75a52-4ffd-49af-8567-0bdaa84d00f4","Type":"ContainerDied","Data":"ce34ec8cba57d4426ee77405455a5a69ba32a3c8cdcfa255e8d17aa7c8c34d48"} Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.385000 4858 scope.go:117] "RemoveContainer" containerID="6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.385084 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmnm5" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.403205 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gwqpg" event={"ID":"cae6cc38-66f8-4e5a-9ce2-62a23de04553","Type":"ContainerDied","Data":"85a50d90c74f2b5f201ff660a1988de49cf64fafadb3d13bf6855ce5e7b51da3"} Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.405198 4858 generic.go:334] "Generic (PLEG): container finished" podID="cae6cc38-66f8-4e5a-9ce2-62a23de04553" containerID="85a50d90c74f2b5f201ff660a1988de49cf64fafadb3d13bf6855ce5e7b51da3" exitCode=0 Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.458065 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.473574 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmnm5"] Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.477614 4858 scope.go:117] "RemoveContainer" containerID="99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.508951 4858 scope.go:117] "RemoveContainer" containerID="9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.559130 4858 scope.go:117] "RemoveContainer" containerID="6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b" Nov 22 07:47:48 crc kubenswrapper[4858]: E1122 07:47:48.559607 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b\": container with ID starting with 6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b not found: ID does not exist" containerID="6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.559639 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b"} err="failed to get container status \"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b\": rpc error: code = NotFound desc = could not find container \"6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b\": container with ID starting with 6a21f964da9368e24177bf0ca14e660ce96962d2172bf7fd5fb88342d4cc058b not found: ID does not exist" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.559661 4858 scope.go:117] "RemoveContainer" containerID="99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd" Nov 22 07:47:48 crc kubenswrapper[4858]: E1122 07:47:48.560006 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd\": container with ID starting with 99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd not found: ID does not exist" containerID="99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.560032 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd"} err="failed to get container status 
\"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd\": rpc error: code = NotFound desc = could not find container \"99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd\": container with ID starting with 99d3e655b31028339067f7496d12ab15a6c280f0847989925324e78965b9d1cd not found: ID does not exist" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.560086 4858 scope.go:117] "RemoveContainer" containerID="9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5" Nov 22 07:47:48 crc kubenswrapper[4858]: E1122 07:47:48.560409 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5\": container with ID starting with 9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5 not found: ID does not exist" containerID="9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5" Nov 22 07:47:48 crc kubenswrapper[4858]: I1122 07:47:48.560432 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5"} err="failed to get container status \"9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5\": rpc error: code = NotFound desc = could not find container \"9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5\": container with ID starting with 9492333b61cff2f4eb14328f03b7595dd40ccc75298acbefe440458286b16eb5 not found: ID does not exist" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.557365 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" path="/var/lib/kubelet/pods/c0d75a52-4ffd-49af-8567-0bdaa84d00f4/volumes" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.816699 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.924235 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data\") pod \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.924309 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts\") pod \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.924354 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle\") pod \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.924408 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwvb\" (UniqueName: \"kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb\") pod \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\" (UID: \"cae6cc38-66f8-4e5a-9ce2-62a23de04553\") " Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.932681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts" (OuterVolumeSpecName: "scripts") pod "cae6cc38-66f8-4e5a-9ce2-62a23de04553" (UID: "cae6cc38-66f8-4e5a-9ce2-62a23de04553"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.933176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb" (OuterVolumeSpecName: "kube-api-access-xmwvb") pod "cae6cc38-66f8-4e5a-9ce2-62a23de04553" (UID: "cae6cc38-66f8-4e5a-9ce2-62a23de04553"). InnerVolumeSpecName "kube-api-access-xmwvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.964049 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae6cc38-66f8-4e5a-9ce2-62a23de04553" (UID: "cae6cc38-66f8-4e5a-9ce2-62a23de04553"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:49 crc kubenswrapper[4858]: I1122 07:47:49.964515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data" (OuterVolumeSpecName: "config-data") pod "cae6cc38-66f8-4e5a-9ce2-62a23de04553" (UID: "cae6cc38-66f8-4e5a-9ce2-62a23de04553"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.026808 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.026869 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.026883 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae6cc38-66f8-4e5a-9ce2-62a23de04553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.026903 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwvb\" (UniqueName: \"kubernetes.io/projected/cae6cc38-66f8-4e5a-9ce2-62a23de04553-kube-api-access-xmwvb\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.430233 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gwqpg" event={"ID":"cae6cc38-66f8-4e5a-9ce2-62a23de04553","Type":"ContainerDied","Data":"2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2"} Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.430825 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e357e9b2ca47c256cdb160b471cac067c96c9b8eee2ac0872177ee0d0694bb2" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.430985 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gwqpg" Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.629170 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.629771 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-log" containerID="cri-o://a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4" gracePeriod=30 Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.629850 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-api" containerID="cri-o://abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812" gracePeriod=30 Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.702797 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.703156 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" containerID="cri-o://aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" gracePeriod=30 Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.726049 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.726425 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" 
containerName="nova-metadata-log" containerID="cri-o://2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9" gracePeriod=30 Nov 22 07:47:50 crc kubenswrapper[4858]: I1122 07:47:50.726597 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" containerID="cri-o://d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee" gracePeriod=30 Nov 22 07:47:51 crc kubenswrapper[4858]: I1122 07:47:51.444790 4858 generic.go:334] "Generic (PLEG): container finished" podID="e36700ca-f760-4ca3-9426-246466f122a6" containerID="2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9" exitCode=143 Nov 22 07:47:51 crc kubenswrapper[4858]: I1122 07:47:51.444916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerDied","Data":"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9"} Nov 22 07:47:51 crc kubenswrapper[4858]: I1122 07:47:51.448938 4858 generic.go:334] "Generic (PLEG): container finished" podID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerID="a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4" exitCode=143 Nov 22 07:47:51 crc kubenswrapper[4858]: I1122 07:47:51.449025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerDied","Data":"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4"} Nov 22 07:47:51 crc kubenswrapper[4858]: E1122 07:47:51.483515 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:51 crc kubenswrapper[4858]: E1122 07:47:51.486387 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:51 crc kubenswrapper[4858]: E1122 07:47:51.488663 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:51 crc kubenswrapper[4858]: E1122 07:47:51.488769 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" Nov 22 07:47:53 crc kubenswrapper[4858]: I1122 07:47:53.055867 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:53 crc kubenswrapper[4858]: I1122 07:47:53.121175 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrczr" Nov 22 
07:47:53 crc kubenswrapper[4858]: I1122 07:47:53.813924 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:53 crc kubenswrapper[4858]: I1122 07:47:53.873911 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:40802->10.217.0.193:8775: read: connection reset by peer" Nov 22 07:47:53 crc kubenswrapper[4858]: I1122 07:47:53.874385 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:40816->10.217.0.193:8775: read: connection reset by peer" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.414556 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.428836 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.505771 4858 generic.go:334] "Generic (PLEG): container finished" podID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerID="abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812" exitCode=0 Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.505934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerDied","Data":"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812"} Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.506007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cf315c4-1d66-405c-b1a3-dffc4337cbdb","Type":"ContainerDied","Data":"a798395ba6fb24efabd3ff008885a40eab0b416ce61db5470f0675f9224a52ee"} Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.506035 4858 scope.go:117] "RemoveContainer" containerID="abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.506352 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.521766 4858 generic.go:334] "Generic (PLEG): container finished" podID="e36700ca-f760-4ca3-9426-246466f122a6" containerID="d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee" exitCode=0 Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.522423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerDied","Data":"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee"} Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.522529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e36700ca-f760-4ca3-9426-246466f122a6","Type":"ContainerDied","Data":"17f5463bfac4ad8ff9d2da6c4b7ad20898094d6d198c3fd4c8ce70e4a28422e3"} Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.522615 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrczr" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" containerID="cri-o://144436047fa95058cf20fbc666e18cf422755a2e4432d6d52e8fbd462d758ae6" gracePeriod=2 Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.522616 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.548978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data\") pod \"e36700ca-f760-4ca3-9426-246466f122a6\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.554581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle\") pod \"e36700ca-f760-4ca3-9426-246466f122a6\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555007 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs\") pod \"e36700ca-f760-4ca3-9426-246466f122a6\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4gcg\" (UniqueName: \"kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: 
\"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.555903 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt58d\" (UniqueName: \"kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d\") pod \"e36700ca-f760-4ca3-9426-246466f122a6\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.556023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs\") pod \"e36700ca-f760-4ca3-9426-246466f122a6\" (UID: \"e36700ca-f760-4ca3-9426-246466f122a6\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.557049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle\") pod \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\" (UID: \"3cf315c4-1d66-405c-b1a3-dffc4337cbdb\") " Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.562390 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs" (OuterVolumeSpecName: "logs") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.562633 4858 scope.go:117] "RemoveContainer" containerID="a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.562717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs" (OuterVolumeSpecName: "logs") pod "e36700ca-f760-4ca3-9426-246466f122a6" (UID: "e36700ca-f760-4ca3-9426-246466f122a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.593263 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg" (OuterVolumeSpecName: "kube-api-access-z4gcg") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). InnerVolumeSpecName "kube-api-access-z4gcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.602699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d" (OuterVolumeSpecName: "kube-api-access-pt58d") pod "e36700ca-f760-4ca3-9426-246466f122a6" (UID: "e36700ca-f760-4ca3-9426-246466f122a6"). InnerVolumeSpecName "kube-api-access-pt58d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.638005 4858 scope.go:117] "RemoveContainer" containerID="abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.640288 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812\": container with ID starting with abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812 not found: ID does not exist" containerID="abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.640460 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812"} err="failed to get container status \"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812\": rpc error: code = NotFound desc = could not find container \"abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812\": container with ID starting with abd7fd5a9fb9afddcc59764f7e03fa8a68da5ede5b9ff2feecbaf0c5fe5da812 not found: ID does not exist" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.640492 4858 scope.go:117] "RemoveContainer" containerID="a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.641099 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4\": container with ID starting with a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4 not found: ID does not exist" containerID="a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.641124 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4"} err="failed to get container status \"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4\": rpc error: code = NotFound desc = could not find container \"a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4\": container with ID starting with a728bfee6570c6497c2ed0a87f4ac3774b2140fd87cc7cb1489db59cb31765f4 not found: ID does not exist" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.641138 4858 scope.go:117] "RemoveContainer" containerID="d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.651244 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.662660 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.663212 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt58d\" (UniqueName: \"kubernetes.io/projected/e36700ca-f760-4ca3-9426-246466f122a6-kube-api-access-pt58d\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.663255 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.663268 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36700ca-f760-4ca3-9426-246466f122a6-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.663281 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4gcg\" (UniqueName: \"kubernetes.io/projected/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-kube-api-access-z4gcg\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.669441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data" (OuterVolumeSpecName: "config-data") pod "e36700ca-f760-4ca3-9426-246466f122a6" (UID: "e36700ca-f760-4ca3-9426-246466f122a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.688887 4858 scope.go:117] "RemoveContainer" containerID="2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.691054 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e36700ca-f760-4ca3-9426-246466f122a6" (UID: "e36700ca-f760-4ca3-9426-246466f122a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.698015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data" (OuterVolumeSpecName: "config-data") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.698445 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.712675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e36700ca-f760-4ca3-9426-246466f122a6" (UID: "e36700ca-f760-4ca3-9426-246466f122a6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.736832 4858 scope.go:117] "RemoveContainer" containerID="d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.737827 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee\": container with ID starting with d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee not found: ID does not exist" containerID="d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.737909 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee"} err="failed to get container status \"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee\": rpc error: code = NotFound desc = could not find container \"d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee\": container with ID starting with d5264b86f39f3bb80d7b9537b5ac4e5a016ad367991e91c8230497433116a9ee not found: ID does not exist" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.737954 4858 scope.go:117] "RemoveContainer" containerID="2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.738736 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9\": container with ID starting with 2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9 not found: ID does not exist" containerID="2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.738790 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9"} err="failed to get container status \"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9\": rpc error: code = NotFound desc = could not find container \"2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9\": container with ID starting with 2bc93dccf842a72e328696a112753a02d0557e8bf9d6965f5db144f31cbae8f9 not found: ID does not exist" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.739084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3cf315c4-1d66-405c-b1a3-dffc4337cbdb" (UID: "3cf315c4-1d66-405c-b1a3-dffc4337cbdb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765808 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765870 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765887 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765898 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765909 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf315c4-1d66-405c-b1a3-dffc4337cbdb-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.765922 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36700ca-f760-4ca3-9426-246466f122a6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.856711 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.871917 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.888503 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.902699 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.915605 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916301 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="extract-content" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916347 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="extract-content" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916361 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="extract-utilities" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916370 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="extract-utilities" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916400 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-log" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916415 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-log" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916441 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-log" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916450 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-log" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916460 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="registry-server" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916468 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="registry-server" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916493 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae6cc38-66f8-4e5a-9ce2-62a23de04553" containerName="nova-manage" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916501 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae6cc38-66f8-4e5a-9ce2-62a23de04553" containerName="nova-manage" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916529 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" Nov 22 07:47:54 crc kubenswrapper[4858]: E1122 07:47:54.916561 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-api" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916571 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-api" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916839 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-api" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916866 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-log" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916885 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" containerName="nova-api-log" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916905 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d75a52-4ffd-49af-8567-0bdaa84d00f4" containerName="registry-server" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916916 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae6cc38-66f8-4e5a-9ce2-62a23de04553" containerName="nova-manage" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.916944 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36700ca-f760-4ca3-9426-246466f122a6" containerName="nova-metadata-metadata" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.918586 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.924039 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.924424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.924790 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.927574 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.943990 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.946967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.950465 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:47:54 crc kubenswrapper[4858]: I1122 07:47:54.954276 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.044673 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.072766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.072837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.072883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqnm\" (UniqueName: \"kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.073251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.074532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.074744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.074954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.075137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.075225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.075301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.075404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6p6v\" (UniqueName: \"kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 
07:47:55.177370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6p6v\" (UniqueName: \"kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqnm\" (UniqueName: \"kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.177895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.178259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.184670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.184814 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.186455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.187074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.190228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.190565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.199393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.203691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6p6v\" (UniqueName: \"kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v\") pod \"nova-metadata-0\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.210202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqnm\" (UniqueName: \"kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm\") pod \"nova-api-0\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.320417 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.336463 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.592683 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf315c4-1d66-405c-b1a3-dffc4337cbdb" path="/var/lib/kubelet/pods/3cf315c4-1d66-405c-b1a3-dffc4337cbdb/volumes" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.595461 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36700ca-f760-4ca3-9426-246466f122a6" path="/var/lib/kubelet/pods/e36700ca-f760-4ca3-9426-246466f122a6/volumes" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.618681 4858 generic.go:334] "Generic (PLEG): container finished" podID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerID="144436047fa95058cf20fbc666e18cf422755a2e4432d6d52e8fbd462d758ae6" exitCode=0 Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.618761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerDied","Data":"144436047fa95058cf20fbc666e18cf422755a2e4432d6d52e8fbd462d758ae6"} Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.642403 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.689890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content\") pod \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.691182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities\") pod \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.691350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx6cg\" (UniqueName: \"kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg\") pod \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\" (UID: \"d76f751f-087e-48d1-9f4c-3fe1e386edd8\") " Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.693559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities" (OuterVolumeSpecName: "utilities") pod "d76f751f-087e-48d1-9f4c-3fe1e386edd8" (UID: "d76f751f-087e-48d1-9f4c-3fe1e386edd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.698803 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.710048 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg" (OuterVolumeSpecName: "kube-api-access-jx6cg") pod "d76f751f-087e-48d1-9f4c-3fe1e386edd8" (UID: "d76f751f-087e-48d1-9f4c-3fe1e386edd8"). InnerVolumeSpecName "kube-api-access-jx6cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.800783 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx6cg\" (UniqueName: \"kubernetes.io/projected/d76f751f-087e-48d1-9f4c-3fe1e386edd8-kube-api-access-jx6cg\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.812754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d76f751f-087e-48d1-9f4c-3fe1e386edd8" (UID: "d76f751f-087e-48d1-9f4c-3fe1e386edd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:55 crc kubenswrapper[4858]: I1122 07:47:55.910529 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76f751f-087e-48d1-9f4c-3fe1e386edd8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.021980 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.077904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:47:56 crc kubenswrapper[4858]: W1122 07:47:56.085733 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9023aa66_975c_44c6_8aba_cff06211fd31.slice/crio-46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50 WatchSource:0}: Error finding container 46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50: Status 404 returned error can't find the container with id 46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50 Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.399769 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:56 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:56 crc kubenswrapper[4858]: > Nov 22 07:47:56 crc kubenswrapper[4858]: E1122 07:47:56.480421 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0 is running failed: container process not found" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:56 crc kubenswrapper[4858]: E1122 07:47:56.481073 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0 is running failed: container process not found" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:56 crc kubenswrapper[4858]: E1122 07:47:56.481693 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0 is running failed: container process not 
found" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:47:56 crc kubenswrapper[4858]: E1122 07:47:56.481782 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.659790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerStarted","Data":"c5ed51b8583e97f2df4a7b4d36a5dee9f21c7fa973fc8d4bfdf95afaa4f89084"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.659855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerStarted","Data":"46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.664114 4858 generic.go:334] "Generic (PLEG): container finished" podID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" exitCode=0 Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.664196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a65146ac-6c59-4ef0-a048-2e705c610e9b","Type":"ContainerDied","Data":"aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.674274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrczr" event={"ID":"d76f751f-087e-48d1-9f4c-3fe1e386edd8","Type":"ContainerDied","Data":"6a9174c9922d27f271540e267aa5c62dc93cc28a024a97cd2af032f24cd62b69"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.674361 4858 scope.go:117] "RemoveContainer" containerID="144436047fa95058cf20fbc666e18cf422755a2e4432d6d52e8fbd462d758ae6" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.674632 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrczr" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.691474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerStarted","Data":"a781dbc0e48e09fab41130a13b423bf6eab57d04da347dc3c059feb78f08659a"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.691575 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerStarted","Data":"9e3ea62f498db52ef54c3c4128291426160d3444c41617d48cfac1068fd67616"} Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.723047 4858 scope.go:117] "RemoveContainer" containerID="1a99373127afabc28b7b252e68b078e1805098a8ed6569c042c75771663138c4" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.730625 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.740825 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrczr"] Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.769290 4858 scope.go:117] "RemoveContainer" containerID="2b0bc1d136d024ef0736db554a6dae961aec76f511890c1ed3515ad49fc86a55" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.798486 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.838099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data\") pod \"a65146ac-6c59-4ef0-a048-2e705c610e9b\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.846889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkpk7\" (UniqueName: \"kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7\") pod \"a65146ac-6c59-4ef0-a048-2e705c610e9b\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.847044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle\") pod \"a65146ac-6c59-4ef0-a048-2e705c610e9b\" (UID: \"a65146ac-6c59-4ef0-a048-2e705c610e9b\") " Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.868031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7" (OuterVolumeSpecName: "kube-api-access-nkpk7") pod "a65146ac-6c59-4ef0-a048-2e705c610e9b" (UID: "a65146ac-6c59-4ef0-a048-2e705c610e9b"). InnerVolumeSpecName "kube-api-access-nkpk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.882088 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a65146ac-6c59-4ef0-a048-2e705c610e9b" (UID: "a65146ac-6c59-4ef0-a048-2e705c610e9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.907100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data" (OuterVolumeSpecName: "config-data") pod "a65146ac-6c59-4ef0-a048-2e705c610e9b" (UID: "a65146ac-6c59-4ef0-a048-2e705c610e9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.949864 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.949921 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65146ac-6c59-4ef0-a048-2e705c610e9b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4858]: I1122 07:47:56.949935 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkpk7\" (UniqueName: \"kubernetes.io/projected/a65146ac-6c59-4ef0-a048-2e705c610e9b-kube-api-access-nkpk7\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.549338 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" path="/var/lib/kubelet/pods/d76f751f-087e-48d1-9f4c-3fe1e386edd8/volumes" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.706022 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.706016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a65146ac-6c59-4ef0-a048-2e705c610e9b","Type":"ContainerDied","Data":"7d9f92289eeb1e362d090d09e732df545cc29c4c43887f5cbd5aad6f57813e69"} Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.706183 4858 scope.go:117] "RemoveContainer" containerID="aa36ec8975d5bbddb65e1beb870dced2be856c15770ef00b69caec563b5ce3e0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.721543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerStarted","Data":"ba14a6eadf4f6ecaaaac7e03e75a0670b78a68e6d491fb4484cc6fca27e15f36"} Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.754330 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerStarted","Data":"5dae7ef1cf0b3974032face8f70aee5fa5e4c4f2e7d4ca85f75144f7a600b8fc"} Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.766205 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.796995 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.820380 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:57 crc kubenswrapper[4858]: E1122 07:47:57.821061 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821089 4858 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" Nov 22 07:47:57 crc kubenswrapper[4858]: E1122 07:47:57.821112 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821122 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" Nov 22 07:47:57 crc kubenswrapper[4858]: E1122 07:47:57.821152 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="extract-utilities" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821161 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="extract-utilities" Nov 22 07:47:57 crc kubenswrapper[4858]: E1122 07:47:57.821187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="extract-content" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821198 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="extract-content" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821469 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" containerName="nova-scheduler-scheduler" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.821504 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76f751f-087e-48d1-9f4c-3fe1e386edd8" containerName="registry-server" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.822488 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.825938 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.830548 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.830509379 podStartE2EDuration="3.830509379s" podCreationTimestamp="2025-11-22 07:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:57.786062179 +0000 UTC m=+2239.627485195" watchObservedRunningTime="2025-11-22 07:47:57.830509379 +0000 UTC m=+2239.671932385" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.845693 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.851197 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.851162844 podStartE2EDuration="3.851162844s" podCreationTimestamp="2025-11-22 07:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:57.836107279 +0000 UTC m=+2239.677530295" watchObservedRunningTime="2025-11-22 07:47:57.851162844 +0000 UTC m=+2239.692585850" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.872450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sjdc\" (UniqueName: \"kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.872653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.872709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.975751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.976073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sjdc\" (UniqueName: \"kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.976195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.985703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.992237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:57 crc kubenswrapper[4858]: I1122 07:47:57.999870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sjdc\" (UniqueName: \"kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc\") pod \"nova-scheduler-0\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " pod="openstack/nova-scheduler-0" Nov 22 07:47:58 crc kubenswrapper[4858]: I1122 07:47:58.153071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:47:58 crc kubenswrapper[4858]: I1122 07:47:58.684180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:47:58 crc kubenswrapper[4858]: W1122 07:47:58.690115 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89df03d_10c4_4a66_80dd_272c6ba5a2ae.slice/crio-d0287bf6a7ddad5b16f4720fa3c919be54358cab3a20ab87ee95e1e35b20b883 WatchSource:0}: Error finding container d0287bf6a7ddad5b16f4720fa3c919be54358cab3a20ab87ee95e1e35b20b883: Status 404 returned error can't find the container with id d0287bf6a7ddad5b16f4720fa3c919be54358cab3a20ab87ee95e1e35b20b883 Nov 22 07:47:58 crc kubenswrapper[4858]: I1122 07:47:58.769592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89df03d-10c4-4a66-80dd-272c6ba5a2ae","Type":"ContainerStarted","Data":"d0287bf6a7ddad5b16f4720fa3c919be54358cab3a20ab87ee95e1e35b20b883"} Nov 22 07:47:59 crc kubenswrapper[4858]: I1122 07:47:59.552993 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65146ac-6c59-4ef0-a048-2e705c610e9b" path="/var/lib/kubelet/pods/a65146ac-6c59-4ef0-a048-2e705c610e9b/volumes" Nov 22 07:47:59 crc kubenswrapper[4858]: I1122 07:47:59.785624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89df03d-10c4-4a66-80dd-272c6ba5a2ae","Type":"ContainerStarted","Data":"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda"} Nov 22 07:48:00 crc kubenswrapper[4858]: I1122 07:48:00.336695 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:48:00 crc kubenswrapper[4858]: I1122 07:48:00.336793 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:48:00 crc kubenswrapper[4858]: I1122 07:48:00.774781 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:48:00 crc kubenswrapper[4858]: I1122 
07:48:00.812606 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.8125711190000002 podStartE2EDuration="3.812571119s" podCreationTimestamp="2025-11-22 07:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:47:59.814752933 +0000 UTC m=+2241.656175949" watchObservedRunningTime="2025-11-22 07:48:00.812571119 +0000 UTC m=+2242.653994135" Nov 22 07:48:03 crc kubenswrapper[4858]: I1122 07:48:03.153861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.186803 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.187696 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" containerName="kube-state-metrics" containerID="cri-o://9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04" gracePeriod=30 Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.321449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.322142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.340604 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.340712 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.799802 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.862648 4858 generic.go:334] "Generic (PLEG): container finished" podID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" containerID="9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04" exitCode=2 Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.862717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e1c8353-0669-44c9-840f-d1e30d3b51eb","Type":"ContainerDied","Data":"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04"} Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.862760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e1c8353-0669-44c9-840f-d1e30d3b51eb","Type":"ContainerDied","Data":"d224b32ee12a3e577f7e4c0e2beda356aa76615e226a3fda5ec3893b43bc1c99"} Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.862724 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.862783 4858 scope.go:117] "RemoveContainer" containerID="9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.900055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhz6q\" (UniqueName: \"kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q\") pod \"0e1c8353-0669-44c9-840f-d1e30d3b51eb\" (UID: \"0e1c8353-0669-44c9-840f-d1e30d3b51eb\") " Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.911363 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q" (OuterVolumeSpecName: "kube-api-access-bhz6q") pod "0e1c8353-0669-44c9-840f-d1e30d3b51eb" (UID: "0e1c8353-0669-44c9-840f-d1e30d3b51eb"). InnerVolumeSpecName "kube-api-access-bhz6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.934486 4858 scope.go:117] "RemoveContainer" containerID="9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04" Nov 22 07:48:05 crc kubenswrapper[4858]: E1122 07:48:05.935174 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04\": container with ID starting with 9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04 not found: ID does not exist" containerID="9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04" Nov 22 07:48:05 crc kubenswrapper[4858]: I1122 07:48:05.935237 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04"} err="failed to get container status \"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04\": rpc error: code = NotFound desc = could not find container \"9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04\": container with ID starting with 9e9c0b72e108049d54864fc1d2a353a73e24d9b91dfc02ffa474d53c28875a04 not found: ID does not exist" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.007568 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhz6q\" (UniqueName: \"kubernetes.io/projected/0e1c8353-0669-44c9-840f-d1e30d3b51eb-kube-api-access-bhz6q\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.241439 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.261429 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.271900 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:06 crc kubenswrapper[4858]: E1122 07:48:06.272695 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" containerName="kube-state-metrics" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.272724 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" containerName="kube-state-metrics" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.273026 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" containerName="kube-state-metrics" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.274157 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.282967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.283036 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.284038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.319573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.319649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.319681 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.320215 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5tq\" (UniqueName: \"kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: E1122 07:48:06.362283 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e1c8353_0669_44c9_840f_d1e30d3b51eb.slice/crio-d224b32ee12a3e577f7e4c0e2beda356aa76615e226a3fda5ec3893b43bc1c99\": RecentStats: unable to find data in memory cache]" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.369707 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.369763 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.404821 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.404856 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.413066 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:06 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:06 crc kubenswrapper[4858]: > Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.423603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5tq\" (UniqueName: \"kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.423702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.423724 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.423743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.441952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.442549 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: 
I1122 07:48:06.451307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.458249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5tq\" (UniqueName: \"kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq\") pod \"kube-state-metrics-0\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " pod="openstack/kube-state-metrics-0" Nov 22 07:48:06 crc kubenswrapper[4858]: I1122 07:48:06.619387 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:48:07 crc kubenswrapper[4858]: I1122 07:48:07.187555 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:48:07 crc kubenswrapper[4858]: W1122 07:48:07.191648 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02115b03_d8fe_4334_96d6_cfbde07fd00a.slice/crio-b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62 WatchSource:0}: Error finding container b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62: Status 404 returned error can't find the container with id b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62 Nov 22 07:48:07 crc kubenswrapper[4858]: I1122 07:48:07.552844 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1c8353-0669-44c9-840f-d1e30d3b51eb" path="/var/lib/kubelet/pods/0e1c8353-0669-44c9-840f-d1e30d3b51eb/volumes" Nov 22 07:48:07 crc kubenswrapper[4858]: I1122 07:48:07.906158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02115b03-d8fe-4334-96d6-cfbde07fd00a","Type":"ContainerStarted","Data":"b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62"} Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.051772 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.052516 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="sg-core" containerID="cri-o://d2c31b258cb073077717bdd0a5b6d24e092ef086f0ada42e9ec5f2dc118920cc" gracePeriod=30 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.052532 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="proxy-httpd" containerID="cri-o://4e96784f929d07be90ca54aa2680c48a1f8e47885717f2387f507e471e2f7bd0" gracePeriod=30 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.052416 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-central-agent" containerID="cri-o://719ce04f259bde67ca27de4b703d29d13f81cd5271678321815500ca1f454c2f" gracePeriod=30 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.052671 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-notification-agent" containerID="cri-o://957d69e53083464efbeaab072d7b960ee8b0643c6da7e4e2c055c636d912564d" gracePeriod=30 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.154092 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.207491 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926021 4858 generic.go:334] "Generic (PLEG): container finished" podID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerID="4e96784f929d07be90ca54aa2680c48a1f8e47885717f2387f507e471e2f7bd0" exitCode=0 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926629 4858 generic.go:334] "Generic (PLEG): container finished" podID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerID="d2c31b258cb073077717bdd0a5b6d24e092ef086f0ada42e9ec5f2dc118920cc" exitCode=2 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926646 4858 generic.go:334] "Generic (PLEG): container finished" podID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerID="719ce04f259bde67ca27de4b703d29d13f81cd5271678321815500ca1f454c2f" exitCode=0 Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerDied","Data":"4e96784f929d07be90ca54aa2680c48a1f8e47885717f2387f507e471e2f7bd0"} Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerDied","Data":"d2c31b258cb073077717bdd0a5b6d24e092ef086f0ada42e9ec5f2dc118920cc"} Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.926786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerDied","Data":"719ce04f259bde67ca27de4b703d29d13f81cd5271678321815500ca1f454c2f"} Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.930255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02115b03-d8fe-4334-96d6-cfbde07fd00a","Type":"ContainerStarted","Data":"d2fce1b7f44ee254502c1ee4737ddad02ab713e7ede13cb487c2720cd88d281e"} Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.974633 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.4088272330000002 podStartE2EDuration="2.974603097s" podCreationTimestamp="2025-11-22 07:48:06 +0000 UTC" firstStartedPulling="2025-11-22 07:48:07.195588277 +0000 UTC m=+2249.037011283" lastFinishedPulling="2025-11-22 07:48:07.761364151 +0000 UTC m=+2249.602787147" observedRunningTime="2025-11-22 07:48:08.952963371 +0000 UTC m=+2250.794386417" watchObservedRunningTime="2025-11-22 07:48:08.974603097 +0000 UTC m=+2250.816026103" Nov 22 07:48:08 crc kubenswrapper[4858]: I1122 07:48:08.983764 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:48:09 crc kubenswrapper[4858]: I1122 07:48:09.948980 4858 generic.go:334] "Generic (PLEG): container finished" podID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerID="957d69e53083464efbeaab072d7b960ee8b0643c6da7e4e2c055c636d912564d" exitCode=0 Nov 
22 07:48:09 crc kubenswrapper[4858]: I1122 07:48:09.949275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerDied","Data":"957d69e53083464efbeaab072d7b960ee8b0643c6da7e4e2c055c636d912564d"} Nov 22 07:48:09 crc kubenswrapper[4858]: I1122 07:48:09.950886 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.313497 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.344915 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.345097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.346448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgc2\" (UniqueName: \"kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.346587 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.346737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.346776 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.346902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd\") pod \"99ca8988-bad7-40c3-8472-91390b87f8eb\" (UID: \"99ca8988-bad7-40c3-8472-91390b87f8eb\") " Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.350438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.351619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.355728 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts" (OuterVolumeSpecName: "scripts") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.382029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2" (OuterVolumeSpecName: "kube-api-access-ghgc2") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "kube-api-access-ghgc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.418642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.455705 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.455778 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgc2\" (UniqueName: \"kubernetes.io/projected/99ca8988-bad7-40c3-8472-91390b87f8eb-kube-api-access-ghgc2\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.455795 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.455806 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.455846 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99ca8988-bad7-40c3-8472-91390b87f8eb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.545275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data" (OuterVolumeSpecName: "config-data") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.559425 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.583858 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99ca8988-bad7-40c3-8472-91390b87f8eb" (UID: "99ca8988-bad7-40c3-8472-91390b87f8eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.662847 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ca8988-bad7-40c3-8472-91390b87f8eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.974210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99ca8988-bad7-40c3-8472-91390b87f8eb","Type":"ContainerDied","Data":"d98f227313c167809e4a166b46efe40bb955c9c23f238525151725683cd1a312"} Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.974294 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:48:10 crc kubenswrapper[4858]: I1122 07:48:10.974337 4858 scope.go:117] "RemoveContainer" containerID="4e96784f929d07be90ca54aa2680c48a1f8e47885717f2387f507e471e2f7bd0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.014379 4858 scope.go:117] "RemoveContainer" containerID="d2c31b258cb073077717bdd0a5b6d24e092ef086f0ada42e9ec5f2dc118920cc" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.046069 4858 scope.go:117] "RemoveContainer" containerID="957d69e53083464efbeaab072d7b960ee8b0643c6da7e4e2c055c636d912564d" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.047282 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.066060 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.097747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:11 crc kubenswrapper[4858]: E1122 07:48:11.098527 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-notification-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.098552 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-notification-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: E1122 07:48:11.098581 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-central-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.098590 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-central-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: E1122 07:48:11.098632 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="proxy-httpd" Nov 22 07:48:11 crc 
kubenswrapper[4858]: I1122 07:48:11.098642 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="proxy-httpd" Nov 22 07:48:11 crc kubenswrapper[4858]: E1122 07:48:11.098676 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="sg-core" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.098684 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="sg-core" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.110572 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-notification-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.110685 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="ceilometer-central-agent" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.110748 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="sg-core" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.110767 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" containerName="proxy-httpd" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.131925 4858 scope.go:117] "RemoveContainer" containerID="719ce04f259bde67ca27de4b703d29d13f81cd5271678321815500ca1f454c2f" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.147172 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.151472 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.151544 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.175007 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86pf\" (UniqueName: \"kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189549 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.189722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.192613 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.292747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.293837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.294045 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.294313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.294513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.293552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.294679 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.294706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.295252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86pf\" (UniqueName: \"kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.295573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.305107 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.306035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.306155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.316946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.318658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " 
pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.321889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86pf\" (UniqueName: \"kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf\") pod \"ceilometer-0\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.546218 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:48:11 crc kubenswrapper[4858]: I1122 07:48:11.557803 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ca8988-bad7-40c3-8472-91390b87f8eb" path="/var/lib/kubelet/pods/99ca8988-bad7-40c3-8472-91390b87f8eb/volumes" Nov 22 07:48:12 crc kubenswrapper[4858]: I1122 07:48:12.143962 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:12 crc kubenswrapper[4858]: W1122 07:48:12.146345 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccd3a3a_6077_4b71_a6ac_a9289bb59b98.slice/crio-a53746aebccb4cd57132e990003d213470f70263021d3e96deb4e1b50fc1dcb9 WatchSource:0}: Error finding container a53746aebccb4cd57132e990003d213470f70263021d3e96deb4e1b50fc1dcb9: Status 404 returned error can't find the container with id a53746aebccb4cd57132e990003d213470f70263021d3e96deb4e1b50fc1dcb9 Nov 22 07:48:13 crc kubenswrapper[4858]: I1122 07:48:13.005387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerStarted","Data":"a53746aebccb4cd57132e990003d213470f70263021d3e96deb4e1b50fc1dcb9"} Nov 22 07:48:14 crc kubenswrapper[4858]: I1122 07:48:14.032709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerStarted","Data":"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663"} Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.051114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerStarted","Data":"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab"} Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.338583 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.340540 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.349134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.349513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.351417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.357698 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:48:15 crc kubenswrapper[4858]: I1122 07:48:15.360932 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" 
Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.066564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerStarted","Data":"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d"} Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.067777 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.074450 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.078773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.438694 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:16 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:16 crc kubenswrapper[4858]: > Nov 22 07:48:16 crc kubenswrapper[4858]: I1122 07:48:16.638097 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:48:17 crc kubenswrapper[4858]: I1122 07:48:17.086790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerStarted","Data":"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64"} Nov 22 07:48:17 crc kubenswrapper[4858]: I1122 07:48:17.121175 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.692338828 podStartE2EDuration="6.121147578s" podCreationTimestamp="2025-11-22 07:48:11 +0000 UTC" firstStartedPulling="2025-11-22 07:48:12.151148956 +0000 UTC m=+2253.992571962" lastFinishedPulling="2025-11-22 07:48:16.579957706 +0000 UTC m=+2258.421380712" observedRunningTime="2025-11-22 07:48:17.118113552 +0000 UTC m=+2258.959536568" watchObservedRunningTime="2025-11-22 07:48:17.121147578 +0000 UTC m=+2258.962570584" Nov 22 07:48:18 crc kubenswrapper[4858]: I1122 07:48:18.100155 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:48:26 crc kubenswrapper[4858]: I1122 07:48:26.379300 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:26 crc kubenswrapper[4858]: > Nov 22 07:48:35 crc kubenswrapper[4858]: I1122 07:48:35.405969 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:48:35 crc kubenswrapper[4858]: I1122 07:48:35.493665 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:48:35 crc kubenswrapper[4858]: I1122 07:48:35.658809 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.308095 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-8slpj" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" containerID="cri-o://45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5" gracePeriod=2 Nov 22 07:48:37 crc kubenswrapper[4858]: E1122 07:48:37.472718 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fb3d0d_6a86_4e12_8e40_b60ab258b061.slice/crio-45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.872166 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.993073 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities\") pod \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.993403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwqft\" (UniqueName: \"kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft\") pod \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.993457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content\") pod \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\" (UID: \"98fb3d0d-6a86-4e12-8e40-b60ab258b061\") " Nov 22 07:48:37 crc kubenswrapper[4858]: I1122 07:48:37.994590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities" (OuterVolumeSpecName: "utilities") pod "98fb3d0d-6a86-4e12-8e40-b60ab258b061" (UID: "98fb3d0d-6a86-4e12-8e40-b60ab258b061"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.002701 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft" (OuterVolumeSpecName: "kube-api-access-bwqft") pod "98fb3d0d-6a86-4e12-8e40-b60ab258b061" (UID: "98fb3d0d-6a86-4e12-8e40-b60ab258b061"). InnerVolumeSpecName "kube-api-access-bwqft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.097086 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.097203 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwqft\" (UniqueName: \"kubernetes.io/projected/98fb3d0d-6a86-4e12-8e40-b60ab258b061-kube-api-access-bwqft\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.108720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98fb3d0d-6a86-4e12-8e40-b60ab258b061" (UID: "98fb3d0d-6a86-4e12-8e40-b60ab258b061"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.200075 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fb3d0d-6a86-4e12-8e40-b60ab258b061-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.325277 4858 generic.go:334] "Generic (PLEG): container finished" podID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerID="45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5" exitCode=0 Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.325368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerDied","Data":"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5"} Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.325449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8slpj" event={"ID":"98fb3d0d-6a86-4e12-8e40-b60ab258b061","Type":"ContainerDied","Data":"bf39f8dbbf15429addd259d14d16451f12f3dfe5f50895c51c3e075ef516709a"} Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.325478 4858 scope.go:117] "RemoveContainer" containerID="45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.325403 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8slpj" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.371215 4858 scope.go:117] "RemoveContainer" containerID="bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.373654 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.387052 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8slpj"] Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.404970 4858 scope.go:117] "RemoveContainer" containerID="e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.458674 4858 scope.go:117] "RemoveContainer" containerID="45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5" Nov 22 07:48:38 crc kubenswrapper[4858]: E1122 07:48:38.459556 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5\": container with ID starting with 45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5 not found: ID does not exist" containerID="45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.459602 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5"} err="failed to get container status \"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5\": rpc error: code = NotFound desc = could not find container \"45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5\": container with ID starting with 45b008d3ef2e559844822095239e0fe163eba3b43a6b744b0cc783012e5fe8c5 not found: ID does not exist" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.459634 4858 scope.go:117] "RemoveContainer" containerID="bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343" Nov 22 07:48:38 crc kubenswrapper[4858]: E1122 07:48:38.460190 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343\": container with ID starting with bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343 not found: ID does not exist" containerID="bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.460230 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343"} err="failed to get container status \"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343\": rpc error: code = NotFound desc = could not find container \"bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343\": container with ID starting with bc2001c7e62a7ef6f8d970550d7b90a96288424df78efe4e687af50906d2b343 not found: ID does not exist" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.460256 4858 scope.go:117] "RemoveContainer" containerID="e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124" Nov 22 07:48:38 crc kubenswrapper[4858]: E1122 07:48:38.462029 4858 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124\": container with ID starting with e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124 not found: ID does not exist" containerID="e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124" Nov 22 07:48:38 crc kubenswrapper[4858]: I1122 07:48:38.462109 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124"} err="failed to get container status \"e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124\": rpc error: code = NotFound desc = could not find container \"e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124\": container with ID starting with e6ffe7ad5ca702578559a4bef5a34b551e4eb8fbcbadbe5d0e376b5a70b0f124 not found: ID does not exist" Nov 22 07:48:39 crc kubenswrapper[4858]: I1122 07:48:39.553181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" path="/var/lib/kubelet/pods/98fb3d0d-6a86-4e12-8e40-b60ab258b061/volumes" Nov 22 07:48:41 crc kubenswrapper[4858]: I1122 07:48:41.561279 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.443369 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.444166 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9ca29960-de06-4140-aba1-6f9279722ffe" containerName="openstackclient" containerID="cri-o://ebba2b48d81a716f8564ac05f4e67094d58682f34fbb3831a1e66df63c9f2817" gracePeriod=2 Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.485614 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.585862 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:49:02 crc kubenswrapper[4858]: E1122 07:49:02.661236 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:02 crc kubenswrapper[4858]: E1122 07:49:02.661836 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data podName:2a92d321-46e4-4291-8ac3-fc8f039b3dcf nodeName:}" failed. No retries permitted until 2025-11-22 07:49:03.161816449 +0000 UTC m=+2305.003239455 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data") pod "rabbitmq-cell1-server-0" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.959462 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:49:02 crc kubenswrapper[4858]: I1122 07:49:02.962199 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="openstack-network-exporter" containerID="cri-o://0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a" gracePeriod=300 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.004420 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.030684 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.031229 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="extract-content" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031256 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="extract-content" Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.031306 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca29960-de06-4140-aba1-6f9279722ffe" containerName="openstackclient" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031314 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca29960-de06-4140-aba1-6f9279722ffe" containerName="openstackclient" Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.031354 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="extract-utilities" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031363 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="extract-utilities" Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.031380 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031387 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031659 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fb3d0d-6a86-4e12-8e40-b60ab258b061" containerName="registry-server" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.031680 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca29960-de06-4140-aba1-6f9279722ffe" containerName="openstackclient" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.033133 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.057468 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.075479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.075682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rjw\" (UniqueName: \"kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.122126 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance2343-account-delete-nsnxt"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.123902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.152399 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.152796 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="openstack-network-exporter" containerID="cri-o://3ad357e4d1993d5844d93e34adf01294d9318108dd28553ecd2102cef61ce78e" gracePeriod=300 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.177928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6rjw\" (UniqueName: \"kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.178053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.179242 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.179297 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data podName:ddb1a203-c5d9-4ba5-b31b-c6134963af46 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:03.679277349 +0000 UTC m=+2305.520700355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data") pod "rabbitmq-server-0" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46") : configmap "rabbitmq-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.180528 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.180645 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data podName:2a92d321-46e4-4291-8ac3-fc8f039b3dcf nodeName:}" failed. No retries permitted until 2025-11-22 07:49:04.180616731 +0000 UTC m=+2306.022039798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data") pod "rabbitmq-cell1-server-0" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.181398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.184096 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance2343-account-delete-nsnxt"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.255938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6rjw\" (UniqueName: \"kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw\") pod \"cinder1521-account-delete-m9vdj\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.262235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.262581 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" containerID="cri-o://6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" gracePeriod=30 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.262728 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="openstack-network-exporter" containerID="cri-o://39fde520f058b73ce73c8fd11a8bfa24e055a38211b694c36194ba6867caba1b" gracePeriod=30 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.280236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq9g\" (UniqueName: \"kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.280343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.294191 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.296156 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.316434 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.336579 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hfjnq"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.351369 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hfjnq"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.370739 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.384726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq9g\" (UniqueName: \"kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.384785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcgc\" (UniqueName: \"kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.384828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.384924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.385739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.479440 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.481163 4858 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.492691 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="ovsdbserver-sb" containerID="cri-o://e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80" gracePeriod=300 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.494375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.494551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcgc\" (UniqueName: \"kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.496204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.536775 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq9g\" (UniqueName: \"kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g\") pod \"glance2343-account-delete-nsnxt\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.585047 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5e0507-55cd-49e4-bf31-1e13d0bfee53" path="/var/lib/kubelet/pods/6f5e0507-55cd-49e4-bf31-1e13d0bfee53/volumes" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.586225 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-djszx"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.608443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcgc\" (UniqueName: \"kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc\") pod \"neutroneea0-account-delete-4d76b\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.623454 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.634596 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqwh\" (UniqueName: \"kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.634725 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.647650 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-djszx"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.661640 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4c8pg"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.683545 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4c8pg"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.685143 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.712870 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican0446-account-delete-s8t8x"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.714523 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.733118 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dmpsm"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.741441 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqwh\" (UniqueName: \"kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.741531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.742661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.742731 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: E1122 07:49:03.742781 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data podName:ddb1a203-c5d9-4ba5-b31b-c6134963af46 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:04.742766509 +0000 UTC m=+2306.584189515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data") pod "rabbitmq-server-0" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46") : configmap "rabbitmq-config-data" not found Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.792661 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.811942 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-czmj7"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.824386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqwh\" (UniqueName: \"kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh\") pod \"placement9450-account-delete-9jrdm\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.830746 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="ovsdbserver-nb" containerID="cri-o://b418a78ed1ffafc15b6ad4bd4c7badd60596b1b40cbf746a619168a2e1a176d2" gracePeriod=300 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.845714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.845767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlfsz\" (UniqueName: \"kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.864933 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8d445612-f1b5-47d6-b247-398725d6fe54/ovsdbserver-sb/0.log" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.864990 4858 generic.go:334] "Generic (PLEG): container finished" podID="8d445612-f1b5-47d6-b247-398725d6fe54" containerID="0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a" exitCode=2 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.865090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerDied","Data":"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a"} Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.870151 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rkx92"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.908552 4858 generic.go:334] "Generic (PLEG): container finished" podID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerID="3ad357e4d1993d5844d93e34adf01294d9318108dd28553ecd2102cef61ce78e" exitCode=2 Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.908608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerDied","Data":"3ad357e4d1993d5844d93e34adf01294d9318108dd28553ecd2102cef61ce78e"} Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.921717 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-czmj7"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.959275 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican0446-account-delete-s8t8x"] Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.960850 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.966747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlfsz\" (UniqueName: \"kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.966804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.968080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:03 crc kubenswrapper[4858]: I1122 07:49:03.983517 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dmpsm"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.072351 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rkx92"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.143232 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.143630 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="dnsmasq-dns" containerID="cri-o://d4513547d8a2ac717f8f1030a117cfe4b3acd8b155fe44df16b760b78b855132" gracePeriod=10 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.245050 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gwqpg"] Nov 22 07:49:04 crc kubenswrapper[4858]: E1122 07:49:04.255915 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:04 crc kubenswrapper[4858]: E1122 07:49:04.256027 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data podName:2a92d321-46e4-4291-8ac3-fc8f039b3dcf nodeName:}" failed. No retries permitted until 2025-11-22 07:49:06.255988662 +0000 UTC m=+2308.097411668 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data") pod "rabbitmq-cell1-server-0" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.273542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlfsz\" (UniqueName: \"kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz\") pod \"barbican0446-account-delete-s8t8x\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.297597 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-gwqpg"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.374911 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.391277 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gdhvl"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.403168 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gdhvl"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.423731 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.437880 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.441304 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.462509 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.482905 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.483650 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-fpwcs" podUID="56c36de6-d90c-48e1-bfda-466b3818ed61" containerName="openstack-network-exporter" containerID="cri-o://a6a5144c8bf6ebe7111561582ed87111819f209ed3451b8464f722f9db2ae3c2" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.531171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.568103 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.568504 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="cinder-scheduler" containerID="cri-o://9a6bd0f287f81f32a2ecd007606ab984ffaa840e52edd920e197cf1530362f85" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.569114 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="probe" containerID="cri-o://a264ec2d1761e844139d64f8cfd921295756c1e25bdc2ca727b8eecd6b023c10" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.600402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67djw\" (UniqueName: \"kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.600522 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.616869 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.617349 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api-log" containerID="cri-o://ab0829a2a45dd01e2464217508ca78a6a10b04e91b998fe9def047ef8aebbd38" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.617527 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" containerID="cri-o://98e56862b8436374df28d433ceba2eba7598bc78c7a8982ea0f1f152b99d551a" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.663872 
4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapid4cc-account-delete-5tdjd"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.665978 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.702587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.703008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.703198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnv7\" (UniqueName: \"kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.703337 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67djw\" (UniqueName: \"kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.704970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.716693 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.717548 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-server" containerID="cri-o://49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.717871 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-updater" containerID="cri-o://93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718063 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="swift-recon-cron" containerID="cri-o://6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab" 
gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718111 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="rsync" containerID="cri-o://db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718147 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-expirer" containerID="cri-o://ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718162 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-auditor" containerID="cri-o://dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718181 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-updater" containerID="cri-o://6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718177 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-reaper" containerID="cri-o://c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718214 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-auditor" containerID="cri-o://d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718243 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-replicator" containerID="cri-o://73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718260 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-replicator" containerID="cri-o://4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718350 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-auditor" containerID="cri-o://a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718381 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-replicator" containerID="cri-o://75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: 
I1122 07:49:04.718401 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-server" containerID="cri-o://5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.718314 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-server" containerID="cri-o://51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.731548 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapid4cc-account-delete-5tdjd"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.735670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67djw\" (UniqueName: \"kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw\") pod \"novacell04e0e-account-delete-lp8zr\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.749200 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.784595 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.784995 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-log" containerID="cri-o://df6351eb07779190404e7510c779e428984d0ff82f2f65b8c045ff400d0f540b" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.785709 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-httpd" containerID="cri-o://358f5eea1c33599a6ff9d0f49219f36c9849f142f1d83d32c74db35d272f5419" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.815199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.815519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnv7\" (UniqueName: \"kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: E1122 07:49:04.816214 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:49:04 crc kubenswrapper[4858]: E1122 07:49:04.816274 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data podName:ddb1a203-c5d9-4ba5-b31b-c6134963af46 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:49:06.81625718 +0000 UTC m=+2308.657680196 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data") pod "rabbitmq-server-0" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46") : configmap "rabbitmq-config-data" not found Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.819751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.823471 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.823829 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56cfd7c4f7-gvswl" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-api" containerID="cri-o://721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.826614 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56cfd7c4f7-gvswl" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-httpd" containerID="cri-o://74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.844546 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="rabbitmq" containerID="cri-o://fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff" gracePeriod=604800 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.852162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnv7\" (UniqueName: \"kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7\") pod \"novaapid4cc-account-delete-5tdjd\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.866802 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.869597 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-log" containerID="cri-o://a0adda39f79e6c29822139189a3c320fe6ee86b411f22c91f3e5eaceb048c381" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.870220 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-httpd" containerID="cri-o://51604b1dd7eceb22876c5f2824f93728dd6ccb3368e18bfb5bdbfd78f9ae8589" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.891754 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.892071 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/placement-f787bd646-rhtm4" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-log" containerID="cri-o://df637c4bab3b1c089c9ad8726c02b0cd45f173fc27bc1d9048018902900124ab" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.892242 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f787bd646-rhtm4" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-api" containerID="cri-o://e3acbe684a3b1cf56d9ce339047e865b4bf5f7e2b06b06679ba47e5ef77b37e7" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.960911 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8d445612-f1b5-47d6-b247-398725d6fe54/ovsdbserver-sb/0.log" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.961048 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.966136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.967496 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.967896 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6547bffc85-6ngjc" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-httpd" containerID="cri-o://eb00d0789abf04eee5762b9ee56aabc63f0f1c94ae705447bb35180a0e8b87ca" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.968074 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6547bffc85-6ngjc" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-server" containerID="cri-o://9a6d75795ae8232c4383f35b0f34c6eed669006f1c3713b8cda81bf151289e3c" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.984271 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:49:04 crc kubenswrapper[4858]: I1122 07:49:04.998060 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.002005 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.002337 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5744c7f6cf-flhrq" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker-log" containerID="cri-o://0fc2e8610b309ec2b9325b8a5fb9a64e0de3f594df62b7a0fe26ced79e91e89c" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.003837 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5744c7f6cf-flhrq" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker" containerID="cri-o://6bf2d7b9ad4531e14c9327a6a63588e930346a2e2dcae212eff919b9b5b4719c" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.027026 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.027414 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" containerID="cri-o://3da885cb1a497446e4704b17b4b8aaf873885fce07483c60700f3f890b5ad6e2" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9hk\" (UniqueName: \"kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") 
pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.029893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config\") pod \"8d445612-f1b5-47d6-b247-398725d6fe54\" (UID: \"8d445612-f1b5-47d6-b247-398725d6fe54\") " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.030796 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api" containerID="cri-o://7d85dd2bf391a295963c1c04a60ba1230b2aacca17a1680433770b7be5c7e8c8" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.032816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config" (OuterVolumeSpecName: "config") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.035860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.040054 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts" (OuterVolumeSpecName: "scripts") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.047990 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.048805 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.049102 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener-log" containerID="cri-o://06711e654f6c8f43dfb70d0e3d0cf613ddc8ac0aa5d4281e2d0aea5c99c77349" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.050130 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener" containerID="cri-o://459ed18256c6e74e65f42b2044fae1a1c6a3d48927d45cffc496a022915a3956" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.060292 4858 generic.go:334] "Generic (PLEG): container finished" podID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerID="df6351eb07779190404e7510c779e428984d0ff82f2f65b8c045ff400d0f540b" exitCode=143 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.060438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerDied","Data":"df6351eb07779190404e7510c779e428984d0ff82f2f65b8c045ff400d0f540b"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.068577 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk" (OuterVolumeSpecName: "kube-api-access-ck9hk") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "kube-api-access-ck9hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.078704 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.079015 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" containerID="cri-o://c5ed51b8583e97f2df4a7b4d36a5dee9f21c7fa973fc8d4bfdf95afaa4f89084" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.079762 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" containerID="cri-o://5dae7ef1cf0b3974032face8f70aee5fa5e4c4f2e7d4ca85f75144f7a600b8fc" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.110580 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.110958 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-log" containerID="cri-o://a781dbc0e48e09fab41130a13b423bf6eab57d04da347dc3c059feb78f08659a" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.111523 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-api" containerID="cri-o://ba14a6eadf4f6ecaaaac7e03e75a0670b78a68e6d491fb4484cc6fca27e15f36" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.132853 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.132899 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.132913 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d445612-f1b5-47d6-b247-398725d6fe54-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.132925 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck9hk\" (UniqueName: \"kubernetes.io/projected/8d445612-f1b5-47d6-b247-398725d6fe54-kube-api-access-ck9hk\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.132954 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.137543 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xgbvx"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.146985 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xgbvx"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.154632 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:05 crc 
kubenswrapper[4858]: I1122 07:49:05.154976 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6a71d1997501103b990d72ab680b1b604e4246555f70c6fd556826a4f81b697b" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.165051 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1968-account-create-h6ffw"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.175751 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.201520 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1968-account-create-h6ffw"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.225100 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.225478 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" containerName="nova-scheduler-scheduler" containerID="cri-o://b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.238805 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6/ovsdbserver-nb/0.log" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.239408 4858 generic.go:334] "Generic (PLEG): container finished" podID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerID="b418a78ed1ffafc15b6ad4bd4c7badd60596b1b40cbf746a619168a2e1a176d2" exitCode=143 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.239458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerDied","Data":"b418a78ed1ffafc15b6ad4bd4c7badd60596b1b40cbf746a619168a2e1a176d2"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.251535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.251852 4858 generic.go:334] "Generic (PLEG): container finished" podID="daa57087-ec21-4cff-aa47-68358e8f5039" containerID="ab0829a2a45dd01e2464217508ca78a6a10b04e91b998fe9def047ef8aebbd38" exitCode=143 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.251910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerDied","Data":"ab0829a2a45dd01e2464217508ca78a6a10b04e91b998fe9def047ef8aebbd38"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.252502 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hrts7"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.271940 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.278616 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="rabbitmq" containerID="cri-o://87dc9b2e06bc62a486c9c4668b5e0075930637436dc360e930cf4a1288e9f350" gracePeriod=604800 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.306157 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.308484 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerName="nova-cell0-conductor-conductor" containerID="cri-o://41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.351241 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.351672 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393778 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393815 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393823 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393829 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393836 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393842 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393848 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.393997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.394008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.394017 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.394026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.433439 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hrts7"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.457110 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8d445612-f1b5-47d6-b247-398725d6fe54/ovsdbserver-sb/0.log" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.457367 4858 generic.go:334] "Generic (PLEG): container finished" podID="8d445612-f1b5-47d6-b247-398725d6fe54" containerID="e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80" exitCode=143 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.457960 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.458582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerDied","Data":"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.458680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8d445612-f1b5-47d6-b247-398725d6fe54","Type":"ContainerDied","Data":"9f93c99ee45d764031d887fe389f340c77d22607f36c1565052b6e3f993c17f6"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.458741 4858 scope.go:117] "RemoveContainer" containerID="0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.490193 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bwlrp"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.507398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.527874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8d445612-f1b5-47d6-b247-398725d6fe54" (UID: "8d445612-f1b5-47d6-b247-398725d6fe54"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.550119 4858 generic.go:334] "Generic (PLEG): container finished" podID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerID="d4513547d8a2ac717f8f1030a117cfe4b3acd8b155fe44df16b760b78b855132" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.558543 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.558585 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d445612-f1b5-47d6-b247-398725d6fe54-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.574799 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fpwcs_56c36de6-d90c-48e1-bfda-466b3818ed61/openstack-network-exporter/0.log" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.574871 4858 generic.go:334] "Generic (PLEG): container finished" podID="56c36de6-d90c-48e1-bfda-466b3818ed61" containerID="a6a5144c8bf6ebe7111561582ed87111819f209ed3451b8464f722f9db2ae3c2" exitCode=2 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.600629 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerID="39fde520f058b73ce73c8fd11a8bfa24e055a38211b694c36194ba6867caba1b" exitCode=2 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.600787 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="galera" containerID="cri-o://2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924" gracePeriod=30 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.615698 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ca29960-de06-4140-aba1-6f9279722ffe" containerID="ebba2b48d81a716f8564ac05f4e67094d58682f34fbb3831a1e66df63c9f2817" exitCode=137 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.621695 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" containerID="cri-o://6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" gracePeriod=29 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.674744 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6547bffc85-6ngjc" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.147:8080/healthcheck\": dial tcp 10.217.0.147:8080: connect: connection refused" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.676539 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6547bffc85-6ngjc" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.147:8080/healthcheck\": dial tcp 10.217.0.147:8080: connect: connection refused" Nov 22 07:49:05 crc kubenswrapper[4858]: E1122 07:49:05.688117 4858 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 22 07:49:05 crc kubenswrapper[4858]: command 
'/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:49:05 crc kubenswrapper[4858]: + source /usr/local/bin/container-scripts/functions Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNBridge=br-int Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNEncapType=geneve Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNAvailabilityZones= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ EnableChassisAsGateway=true Nov 22 07:49:05 crc kubenswrapper[4858]: ++ PhysicalNetworks= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNHostName= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:49:05 crc kubenswrapper[4858]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:49:05 crc kubenswrapper[4858]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + cleanup_ovsdb_server_semaphore Nov 22 07:49:05 crc kubenswrapper[4858]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:49:05 crc kubenswrapper[4858]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-xbvdl" message=< Nov 22 07:49:05 crc kubenswrapper[4858]: Exiting ovsdb-server (5) [ OK ] Nov 22 07:49:05 crc kubenswrapper[4858]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:49:05 crc kubenswrapper[4858]: + source /usr/local/bin/container-scripts/functions Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNBridge=br-int Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNEncapType=geneve Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNAvailabilityZones= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ EnableChassisAsGateway=true Nov 22 07:49:05 crc kubenswrapper[4858]: ++ PhysicalNetworks= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNHostName= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:49:05 crc kubenswrapper[4858]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:49:05 crc kubenswrapper[4858]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + cleanup_ovsdb_server_semaphore Nov 22 07:49:05 crc kubenswrapper[4858]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:49:05 crc kubenswrapper[4858]: > Nov 22 07:49:05 crc kubenswrapper[4858]: E1122 07:49:05.688213 4858 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 22 07:49:05 crc kubenswrapper[4858]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:49:05 crc kubenswrapper[4858]: + source /usr/local/bin/container-scripts/functions Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNBridge=br-int Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNEncapType=geneve Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNAvailabilityZones= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ EnableChassisAsGateway=true Nov 22 07:49:05 crc kubenswrapper[4858]: ++ PhysicalNetworks= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ OVNHostName= Nov 22 07:49:05 crc kubenswrapper[4858]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:49:05 crc kubenswrapper[4858]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:49:05 crc kubenswrapper[4858]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:49:05 crc kubenswrapper[4858]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + sleep 0.5 Nov 22 07:49:05 crc kubenswrapper[4858]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:49:05 crc kubenswrapper[4858]: + cleanup_ovsdb_server_semaphore Nov 22 07:49:05 crc kubenswrapper[4858]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:49:05 crc kubenswrapper[4858]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:49:05 crc kubenswrapper[4858]: > pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" containerID="cri-o://ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.688279 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" containerID="cri-o://ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" gracePeriod=29 Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.896014 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12958341-df4b-4746-9621-04a44a4dafea" path="/var/lib/kubelet/pods/12958341-df4b-4746-9621-04a44a4dafea/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.897303 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557eea09-096b-40be-8182-638ffcaa230e" path="/var/lib/kubelet/pods/557eea09-096b-40be-8182-638ffcaa230e/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.897968 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854da42b-c1a7-4390-91cf-2fa7fa3e8eab" path="/var/lib/kubelet/pods/854da42b-c1a7-4390-91cf-2fa7fa3e8eab/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.898806 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890e1296-50c3-4f46-8359-08d3210fb46d" path="/var/lib/kubelet/pods/890e1296-50c3-4f46-8359-08d3210fb46d/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.900212 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e3f90c-3676-41ee-ab2d-f0dca9196a02" path="/var/lib/kubelet/pods/98e3f90c-3676-41ee-ab2d-f0dca9196a02/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.912991 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4885ab-de3a-4ccf-bfd4-a702a3b9d647" path="/var/lib/kubelet/pods/bb4885ab-de3a-4ccf-bfd4-a702a3b9d647/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.924224 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be60fe14-f226-4d4e-a855-47991607fd04" path="/var/lib/kubelet/pods/be60fe14-f226-4d4e-a855-47991607fd04/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.927516 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae6cc38-66f8-4e5a-9ce2-62a23de04553" path="/var/lib/kubelet/pods/cae6cc38-66f8-4e5a-9ce2-62a23de04553/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.928244 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41812ee-66ac-438e-82b5-cb404aa95294" path="/var/lib/kubelet/pods/d41812ee-66ac-438e-82b5-cb404aa95294/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.929032 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5712f6e-4ef2-4de1-9093-5fa00d6a1d08" path="/var/lib/kubelet/pods/f5712f6e-4ef2-4de1-9093-5fa00d6a1d08/volumes" Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.939954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" event={"ID":"56ada962-6646-4da6-987d-6e9e277ee8b2","Type":"ContainerDied","Data":"d4513547d8a2ac717f8f1030a117cfe4b3acd8b155fe44df16b760b78b855132"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpwcs" event={"ID":"56c36de6-d90c-48e1-bfda-466b3818ed61","Type":"ContainerDied","Data":"a6a5144c8bf6ebe7111561582ed87111819f209ed3451b8464f722f9db2ae3c2"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerDied","Data":"39fde520f058b73ce73c8fd11a8bfa24e055a38211b694c36194ba6867caba1b"} Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940043 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940068 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bwlrp"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940110 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940123 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance2343-account-delete-nsnxt"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940134 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940145 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican0446-account-delete-s8t8x"] Nov 22 07:49:05 crc kubenswrapper[4858]: I1122 07:49:05.940536 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" gracePeriod=30 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.031346 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.062658 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6/ovsdbserver-nb/0.log" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.071942 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.077886 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fpwcs_56c36de6-d90c-48e1-bfda-466b3818ed61/openstack-network-exporter/0.log" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.077973 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.081733 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config\") pod \"9ca29960-de06-4140-aba1-6f9279722ffe\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.106985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmwc\" (UniqueName: \"kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107100 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle\") pod \"56c36de6-d90c-48e1-bfda-466b3818ed61\" (UID: \"56c36de6-d90c-48e1-bfda-466b3818ed61\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107280 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" 
(UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107347 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle\") pod \"9ca29960-de06-4140-aba1-6f9279722ffe\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107374 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107406 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9crv\" (UniqueName: \"kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmgh\" (UniqueName: \"kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh\") pod \"56ada962-6646-4da6-987d-6e9e277ee8b2\" (UID: \"56ada962-6646-4da6-987d-6e9e277ee8b2\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107547 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config\") pod \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret\") pod \"9ca29960-de06-4140-aba1-6f9279722ffe\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvnq4\" (UniqueName: \"kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4\") pod \"9ca29960-de06-4140-aba1-6f9279722ffe\" (UID: \"9ca29960-de06-4140-aba1-6f9279722ffe\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.107666 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir\") pod 
\"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\" (UID: \"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6\") " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.108564 4858 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.113779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.123368 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.128356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config" (OuterVolumeSpecName: "config") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.142729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts" (OuterVolumeSpecName: "scripts") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.154639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4" (OuterVolumeSpecName: "kube-api-access-pvnq4") pod "9ca29960-de06-4140-aba1-6f9279722ffe" (UID: "9ca29960-de06-4140-aba1-6f9279722ffe"). InnerVolumeSpecName "kube-api-access-pvnq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.163098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config" (OuterVolumeSpecName: "config") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.188630 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh" (OuterVolumeSpecName: "kube-api-access-mnmgh") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "kube-api-access-mnmgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.192367 4858 scope.go:117] "RemoveContainer" containerID="e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.200783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc" (OuterVolumeSpecName: "kube-api-access-7nmwc") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "kube-api-access-7nmwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.214419 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.214694 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvnq4\" (UniqueName: \"kubernetes.io/projected/9ca29960-de06-4140-aba1-6f9279722ffe-kube-api-access-pvnq4\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.214806 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.214896 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c36de6-d90c-48e1-bfda-466b3818ed61-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.215039 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmwc\" (UniqueName: \"kubernetes.io/projected/56c36de6-d90c-48e1-bfda-466b3818ed61-kube-api-access-7nmwc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.215114 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56c36de6-d90c-48e1-bfda-466b3818ed61-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.215180 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.215247 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmgh\" (UniqueName: \"kubernetes.io/projected/56ada962-6646-4da6-987d-6e9e277ee8b2-kube-api-access-mnmgh\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.277180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9ca29960-de06-4140-aba1-6f9279722ffe" (UID: "9ca29960-de06-4140-aba1-6f9279722ffe"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.287948 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.293366 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.297375 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.312426 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv" (OuterVolumeSpecName: "kube-api-access-c9crv") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "kube-api-access-c9crv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.319959 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.320017 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9crv\" (UniqueName: \"kubernetes.io/projected/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-kube-api-access-c9crv\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.320031 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.320066 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.320082 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.320093 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.320244 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data podName:2a92d321-46e4-4291-8ac3-fc8f039b3dcf nodeName:}" failed. No retries permitted until 2025-11-22 07:49:10.320175689 +0000 UTC m=+2312.161598765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data") pod "rabbitmq-cell1-server-0" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.401114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.407511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.421877 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.421923 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.425443 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca29960-de06-4140-aba1-6f9279722ffe" (UID: "9ca29960-de06-4140-aba1-6f9279722ffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.426170 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.435540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.524036 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.532006 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.532039 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.578072 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.580186 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapid4cc-account-delete-5tdjd"] Nov 22 07:49:06 crc kubenswrapper[4858]: W1122 07:49:06.608440 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c63759_4028_4b22_acb3_c9c78f9cbfce.slice/crio-e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284 WatchSource:0}: Error finding container e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284: Status 404 returned error can't find the container with id e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.617730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: W1122 07:49:06.651466 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod964bc658_f627_428c_9dbd_dd640e9394bc.slice/crio-7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4 WatchSource:0}: Error finding container 7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4: Status 404 returned error can't find the container with id 7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.653919 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.670814 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.724272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroneea0-account-delete-4d76b" event={"ID":"57b11c1e-be66-4546-bf19-b2a71c05256c","Type":"ContainerStarted","Data":"0ffe925291d94a54dd7e74083ebb0526ce005621900a18e567256fec4d4052b8"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.732505 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" (UID: "14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.735863 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerID="a0adda39f79e6c29822139189a3c320fe6ee86b411f22c91f3e5eaceb048c381" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.735976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerDied","Data":"a0adda39f79e6c29822139189a3c320fe6ee86b411f22c91f3e5eaceb048c381"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.741400 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9ca29960-de06-4140-aba1-6f9279722ffe" (UID: "9ca29960-de06-4140-aba1-6f9279722ffe"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.741938 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerID="a781dbc0e48e09fab41130a13b423bf6eab57d04da347dc3c059feb78f08659a" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.741994 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerDied","Data":"a781dbc0e48e09fab41130a13b423bf6eab57d04da347dc3c059feb78f08659a"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.754730 4858 generic.go:334] "Generic (PLEG): container finished" podID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerID="06711e654f6c8f43dfb70d0e3d0cf613ddc8ac0aa5d4281e2d0aea5c99c77349" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.754821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerDied","Data":"06711e654f6c8f43dfb70d0e3d0cf613ddc8ac0aa5d4281e2d0aea5c99c77349"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.755797 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.755833 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.755846 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ca29960-de06-4140-aba1-6f9279722ffe-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.760540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config" (OuterVolumeSpecName: "config") pod "56ada962-6646-4da6-987d-6e9e277ee8b2" (UID: "56ada962-6646-4da6-987d-6e9e277ee8b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.764602 4858 generic.go:334] "Generic (PLEG): container finished" podID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" containerID="6a71d1997501103b990d72ab680b1b604e4246555f70c6fd556826a4f81b697b" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.764719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d","Type":"ContainerDied","Data":"6a71d1997501103b990d72ab680b1b604e4246555f70c6fd556826a4f81b697b"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.766959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0446-account-delete-s8t8x" event={"ID":"aa36d9bc-2f0d-44bf-97d2-cc8785002875","Type":"ContainerStarted","Data":"d1463af7a9d9f15176dc741ac42085ab13c541ddbf2dfe74f5ec863be3efd4b7"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.768687 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fpwcs_56c36de6-d90c-48e1-bfda-466b3818ed61/openstack-network-exporter/0.log" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.768744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpwcs" event={"ID":"56c36de6-d90c-48e1-bfda-466b3818ed61","Type":"ContainerDied","Data":"ca0bc89ac6cfbb7b139b036afb7baabc42ddbb83b2183f82f8736aa83e047d2c"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.768812 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fpwcs" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.781762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell04e0e-account-delete-lp8zr" event={"ID":"31c63759-4028-4b22-acb3-c9c78f9cbfce","Type":"ContainerStarted","Data":"e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.788095 4858 generic.go:334] "Generic (PLEG): container finished" podID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerID="74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.788159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerDied","Data":"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.790469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1521-account-delete-m9vdj" event={"ID":"465c8e4d-cc9e-406b-8460-41e83f1dfadb","Type":"ContainerStarted","Data":"3f28982976a07e7584b2327e7e02f0e0e5ab58aad91bf0538b3d72b39a455f7a"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.793000 4858 generic.go:334] "Generic (PLEG): container finished" podID="d27a55dc-71d3-468f-b503-8436883c2771" containerID="3da885cb1a497446e4704b17b4b8aaf873885fce07483c60700f3f890b5ad6e2" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.793055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerDied","Data":"3da885cb1a497446e4704b17b4b8aaf873885fce07483c60700f3f890b5ad6e2"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.802517 4858 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "56c36de6-d90c-48e1-bfda-466b3818ed61" (UID: "56c36de6-d90c-48e1-bfda-466b3818ed61"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811226 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811278 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811289 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811301 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811313 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811341 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811350 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.811524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.821103 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement9450-account-delete-9jrdm" event={"ID":"d8be274c-bb8a-43d2-8a56-dacb6789d343","Type":"ContainerStarted","Data":"f7609f681037292b39c17e6a3fa031165ec22db2347ff44274fe7c63fa910b32"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.829551 4858 generic.go:334] "Generic (PLEG): container finished" podID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerID="0fc2e8610b309ec2b9325b8a5fb9a64e0de3f594df62b7a0fe26ced79e91e89c" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.842684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerDied","Data":"0fc2e8610b309ec2b9325b8a5fb9a64e0de3f594df62b7a0fe26ced79e91e89c"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.854011 4858 generic.go:334] "Generic (PLEG): container finished" podID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerID="df637c4bab3b1c089c9ad8726c02b0cd45f173fc27bc1d9048018902900124ab" exitCode=143 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.854095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerDied","Data":"df637c4bab3b1c089c9ad8726c02b0cd45f173fc27bc1d9048018902900124ab"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.857260 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ada962-6646-4da6-987d-6e9e277ee8b2-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.857292 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56c36de6-d90c-48e1-bfda-466b3818ed61-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.857554 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.857650 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data podName:ddb1a203-c5d9-4ba5-b31b-c6134963af46 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:10.857625822 +0000 UTC m=+2312.699048838 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data") pod "rabbitmq-server-0" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46") : configmap "rabbitmq-config-data" not found Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.860783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" event={"ID":"56ada962-6646-4da6-987d-6e9e277ee8b2","Type":"ContainerDied","Data":"cb8d8ac8061b42f488c86c16d0d86adb60ca6c2398fda29bfcf9285579ed00f7"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.860905 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-kvhvp" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.893235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2343-account-delete-nsnxt" event={"ID":"a04a3a5c-6169-4e97-a167-1c168a8d1690","Type":"ContainerStarted","Data":"f902da28db88b34859f3e22f25cb37ad9b95216bcb1b4204df6f1924113f5320"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.898977 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6/ovsdbserver-nb/0.log" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.900879 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6","Type":"ContainerDied","Data":"fe2580fb65f0703d75f266fef0d8976f9549d16faec0950678a5220120085269"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.901025 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.909582 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.926516 4858 generic.go:334] "Generic (PLEG): container finished" podID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.926624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerDied","Data":"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581"} Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.926667 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:06 crc kubenswrapper[4858]: E1122 07:49:06.935590 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:06 crc 
kubenswrapper[4858]: E1122 07:49:06.935664 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.946839 4858 generic.go:334] "Generic (PLEG): container finished" podID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerID="9a6d75795ae8232c4383f35b0f34c6eed669006f1c3713b8cda81bf151289e3c" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.946869 4858 generic.go:334] "Generic (PLEG): container finished" podID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerID="eb00d0789abf04eee5762b9ee56aabc63f0f1c94ae705447bb35180a0e8b87ca" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.947016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerDied","Data":"9a6d75795ae8232c4383f35b0f34c6eed669006f1c3713b8cda81bf151289e3c"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.947048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerDied","Data":"eb00d0789abf04eee5762b9ee56aabc63f0f1c94ae705447bb35180a0e8b87ca"} Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.980398 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.981531 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:49:06 crc kubenswrapper[4858]: I1122 07:49:06.987784 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-kvhvp"] Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.000001 4858 scope.go:117] "RemoveContainer" containerID="0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a" Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.001656 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a\": container with ID starting with 0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a not found: ID does not exist" containerID="0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.001695 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a"} err="failed to get container status \"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a\": rpc error: code = NotFound desc = could not find container \"0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a\": container with ID starting with 0779812b25aad7479bed297ca131066188d55aabf044c954ba726fd437b8890a not found: ID does not exist" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.001721 4858 scope.go:117] "RemoveContainer" containerID="e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80" Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.002170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80\": container with ID starting with e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80 not found: ID does not exist" containerID="e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.002187 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80"} err="failed to get container status \"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80\": rpc error: code = NotFound desc = could not find container \"e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80\": container with ID starting with e763cab54340dcda6425a3ae4b87eff2262396b0eec2ffe37340d723207d2b80 not found: ID does not exist" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.002200 4858 scope.go:117] "RemoveContainer" containerID="a6a5144c8bf6ebe7111561582ed87111819f209ed3451b8464f722f9db2ae3c2" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.002929 4858 generic.go:334] "Generic (PLEG): container finished" podID="9023aa66-975c-44c6-8aba-cff06211fd31" containerID="c5ed51b8583e97f2df4a7b4d36a5dee9f21c7fa973fc8d4bfdf95afaa4f89084" exitCode=143 Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.003023 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerDied","Data":"c5ed51b8583e97f2df4a7b4d36a5dee9f21c7fa973fc8d4bfdf95afaa4f89084"} Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.018727 4858 generic.go:334] "Generic (PLEG): container finished" podID="a10b7a00-765d-465e-b80e-e795da936e68" containerID="a264ec2d1761e844139d64f8cfd921295756c1e25bdc2ca727b8eecd6b023c10" exitCode=0 Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.018787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerDied","Data":"a264ec2d1761e844139d64f8cfd921295756c1e25bdc2ca727b8eecd6b023c10"} Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.025579 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.036459 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.066594 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:07 crc kubenswrapper[4858]: E1122 07:49:07.066850 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.077532 4858 scope.go:117] "RemoveContainer" containerID="d4513547d8a2ac717f8f1030a117cfe4b3acd8b155fe44df16b760b78b855132" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.296918 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.317975 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.332891 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.365153 4858 scope.go:117] "RemoveContainer" containerID="59af042d3acb4898c5619edff95f385338616bbd0b3ce774e93c98a8b4ce51c3" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.372637 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.384809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d7bd\" (UniqueName: \"kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd\") pod \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.384912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle\") pod \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.384952 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs\") pod \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.385016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data\") pod \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.385070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs\") pod \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\" (UID: \"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.448644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd" (OuterVolumeSpecName: "kube-api-access-6d7bd") pod "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" (UID: "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d"). 
InnerVolumeSpecName "kube-api-access-6d7bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.488612 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d7bd\" (UniqueName: \"kubernetes.io/projected/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-kube-api-access-6d7bd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.522597 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" (UID: "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.528428 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data" (OuterVolumeSpecName: "config-data") pod "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" (UID: "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.530084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" (UID: "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.534551 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" (UID: "316e9e3f-ff34-4e81-9e22-aa5aa167ad9d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.555207 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" path="/var/lib/kubelet/pods/14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6/volumes" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.556713 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" path="/var/lib/kubelet/pods/56ada962-6646-4da6-987d-6e9e277ee8b2/volumes" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.557682 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca29960-de06-4140-aba1-6f9279722ffe" path="/var/lib/kubelet/pods/9ca29960-de06-4140-aba1-6f9279722ffe/volumes" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.559280 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b100894f-375d-4d4f-9bfa-7c87e4db058d" path="/var/lib/kubelet/pods/b100894f-375d-4d4f-9bfa-7c87e4db058d/volumes" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.591179 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.591228 4858 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.591242 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.591253 4858 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.673278 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.680979 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.693083 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-fpwcs"] Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.693688 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.719099 4858 scope.go:117] "RemoveContainer" containerID="b418a78ed1ffafc15b6ad4bd4c7badd60596b1b40cbf746a619168a2e1a176d2" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.755360 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.760149 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.782836 4858 scope.go:117] "RemoveContainer" containerID="3ad357e4d1993d5844d93e34adf01294d9318108dd28553ecd2102cef61ce78e" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.795621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.795671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.795713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkbt\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.795799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.795961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.796019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.796056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.796096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle\") pod \"679c2346-5f5a-450e-b40d-1d371f1f8447\" (UID: \"679c2346-5f5a-450e-b40d-1d371f1f8447\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.796756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.802392 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.804670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.818855 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt" (OuterVolumeSpecName: "kube-api-access-gzkbt") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "kube-api-access-gzkbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.830822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.844693 4858 scope.go:117] "RemoveContainer" containerID="ebba2b48d81a716f8564ac05f4e67094d58682f34fbb3831a1e66df63c9f2817" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.849008 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": read tcp 10.217.0.2:43774->10.217.0.167:8776: read: connection reset by peer" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.910955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data\") pod \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\" (UID: 
\"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sjdc\" (UniqueName: \"kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc\") pod \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle\") pod \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\" (UID: \"f89df03d-10c4-4a66-80dd-272c6ba5a2ae\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912515 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.912541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fjmm\" (UniqueName: \"kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm\") pod \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\" (UID: \"d92662c9-980a-41b0-ad01-bbb1cdaf864b\") " Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.913866 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/679c2346-5f5a-450e-b40d-1d371f1f8447-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.913885 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.913897 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzkbt\" (UniqueName: \"kubernetes.io/projected/679c2346-5f5a-450e-b40d-1d371f1f8447-kube-api-access-gzkbt\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.915149 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.916153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.917539 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.918642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.941115 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.954194 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm" (OuterVolumeSpecName: "kube-api-access-8fjmm") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "kube-api-access-8fjmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4858]: I1122 07:49:07.993309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc" (OuterVolumeSpecName: "kube-api-access-9sjdc") pod "f89df03d-10c4-4a66-80dd-272c6ba5a2ae" (UID: "f89df03d-10c4-4a66-80dd-272c6ba5a2ae"). InnerVolumeSpecName "kube-api-access-9sjdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015054 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle\") pod \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhnww\" (UniqueName: \"kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww\") pod \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data\") pod \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\" (UID: \"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738\") " Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015854 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sjdc\" (UniqueName: \"kubernetes.io/projected/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-kube-api-access-9sjdc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015872 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015882 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015892 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fjmm\" (UniqueName: \"kubernetes.io/projected/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kube-api-access-8fjmm\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015900 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.015909 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d92662c9-980a-41b0-ad01-bbb1cdaf864b-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.054152 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.058221 4858 generic.go:334] "Generic (PLEG): container finished" podID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" containerID="b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda" exitCode=0 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.058302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89df03d-10c4-4a66-80dd-272c6ba5a2ae","Type":"ContainerDied","Data":"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.058382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89df03d-10c4-4a66-80dd-272c6ba5a2ae","Type":"ContainerDied","Data":"d0287bf6a7ddad5b16f4720fa3c919be54358cab3a20ab87ee95e1e35b20b883"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.058410 4858 scope.go:117] "RemoveContainer" containerID="b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.058544 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.066252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww" (OuterVolumeSpecName: "kube-api-access-jhnww") pod "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" (UID: "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738"). InnerVolumeSpecName "kube-api-access-jhnww". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.068027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6547bffc85-6ngjc" event={"ID":"679c2346-5f5a-450e-b40d-1d371f1f8447","Type":"ContainerDied","Data":"cb8e0b98ed5e42a2ea2c44a584ed1ed9bd02541400dbfafb35c9de9fb15addfb"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.068143 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6547bffc85-6ngjc" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.087146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0446-account-delete-s8t8x" event={"ID":"aa36d9bc-2f0d-44bf-97d2-cc8785002875","Type":"ContainerStarted","Data":"95736b04b771d7768eb3f3b40cbcad3bbfcc5992261841d7f094e34a12830692"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.117423 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhnww\" (UniqueName: \"kubernetes.io/projected/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-kube-api-access-jhnww\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.117475 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.123497 4858 scope.go:117] "RemoveContainer" containerID="b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda" Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.124584 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda\": container with ID starting with b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda not found: ID does not exist" containerID="b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.124631 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda"} err="failed to get container status \"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda\": rpc error: code = NotFound desc = could not find container \"b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda\": container with ID starting with b4f958c1f95b77ab8a7a7d423596d3c71a622233f79c541e28356d676890efda not found: ID does not exist" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.124664 4858 scope.go:117] "RemoveContainer" containerID="9a6d75795ae8232c4383f35b0f34c6eed669006f1c3713b8cda81bf151289e3c" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.184092 4858 generic.go:334] "Generic (PLEG): container finished" podID="daa57087-ec21-4cff-aa47-68358e8f5039" containerID="98e56862b8436374df28d433ceba2eba7598bc78c7a8982ea0f1f152b99d551a" exitCode=0 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.184175 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerDied","Data":"98e56862b8436374df28d433ceba2eba7598bc78c7a8982ea0f1f152b99d551a"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.212535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid4cc-account-delete-5tdjd" event={"ID":"964bc658-f627-428c-9dbd-dd640e9394bc","Type":"ContainerStarted","Data":"7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.237672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"316e9e3f-ff34-4e81-9e22-aa5aa167ad9d","Type":"ContainerDied","Data":"441f63fc8e2358b6d2aee749b466d904915b1ae56f86d2c07b2deb13ab980ee1"} Nov 22 07:49:08 crc 
kubenswrapper[4858]: I1122 07:49:08.237817 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.282439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2343-account-delete-nsnxt" event={"ID":"a04a3a5c-6169-4e97-a167-1c168a8d1690","Type":"ContainerStarted","Data":"e3680cb319e6b254d9fb55c5079fa27ee9c17bc3d07f92905d53af9f7a03083e"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.318137 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican0446-account-delete-s8t8x" podStartSLOduration=5.318111274 podStartE2EDuration="5.318111274s" podCreationTimestamp="2025-11-22 07:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:08.114308157 +0000 UTC m=+2309.955731183" watchObservedRunningTime="2025-11-22 07:49:08.318111274 +0000 UTC m=+2310.159534280" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.323448 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance2343-account-delete-nsnxt" podStartSLOduration=5.323423715 podStartE2EDuration="5.323423715s" podCreationTimestamp="2025-11-22 07:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:08.313347281 +0000 UTC m=+2310.154770297" watchObservedRunningTime="2025-11-22 07:49:08.323423715 +0000 UTC m=+2310.164846721" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.324570 4858 generic.go:334] "Generic (PLEG): container finished" podID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerID="2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924" exitCode=0 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.324673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerDied","Data":"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.324708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d92662c9-980a-41b0-ad01-bbb1cdaf864b","Type":"ContainerDied","Data":"d46dbb82e39d7ea4d4c2a65b8c2afa9942910ab0e80bfa4c74ac1e9439907fba"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.324809 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.330775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1521-account-delete-m9vdj" event={"ID":"465c8e4d-cc9e-406b-8460-41e83f1dfadb","Type":"ContainerStarted","Data":"81422c98867143038ef4ffa6c2f72f05f237ab29f232ccca07fb76aa145ecc3f"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.337588 4858 generic.go:334] "Generic (PLEG): container finished" podID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" exitCode=0 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.339498 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.339549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738","Type":"ContainerDied","Data":"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.342778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e66a0fa1-c84b-4b81-b5d9-3775d7dbd738","Type":"ContainerDied","Data":"82d230b26c73a572f6224eb5b8e5354607b94ca8a9099f0bea961c0506ee3291"} Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.355846 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder1521-account-delete-m9vdj" podStartSLOduration=6.355824658 podStartE2EDuration="6.355824658s" podCreationTimestamp="2025-11-22 07:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:08.353813782 +0000 UTC m=+2310.195236798" watchObservedRunningTime="2025-11-22 07:49:08.355824658 +0000 UTC m=+2310.197247654" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.362818 4858 generic.go:334] "Generic (PLEG): container finished" podID="a10b7a00-765d-465e-b80e-e795da936e68" containerID="9a6bd0f287f81f32a2ecd007606ab984ffaa840e52edd920e197cf1530362f85" exitCode=0 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.362975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerDied","Data":"9a6bd0f287f81f32a2ecd007606ab984ffaa840e52edd920e197cf1530362f85"} Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.387412 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.388029 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.391851 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.391939 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" 
probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.391617 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.419255 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.422879 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:08 crc kubenswrapper[4858]: E1122 07:49:08.422948 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.556628 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f89df03d-10c4-4a66-80dd-272c6ba5a2ae" (UID: "f89df03d-10c4-4a66-80dd-272c6ba5a2ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.651048 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" (UID: "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.654277 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.654362 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.658222 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data" (OuterVolumeSpecName: "config-data") pod "f89df03d-10c4-4a66-80dd-272c6ba5a2ae" (UID: "f89df03d-10c4-4a66-80dd-272c6ba5a2ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.693119 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:49:08 crc kubenswrapper[4858]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Nov 22 07:49:08 crc kubenswrapper[4858]: > Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.708968 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.711742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.721556 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:39764->10.217.0.206:8775: read: connection reset by peer" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.721629 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:39766->10.217.0.206:8775: read: connection reset by peer" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.726075 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.761547 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89df03d-10c4-4a66-80dd-272c6ba5a2ae-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.765400 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.773694 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.773898 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.824271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d92662c9-980a-41b0-ad01-bbb1cdaf864b" (UID: "d92662c9-980a-41b0-ad01-bbb1cdaf864b"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.841172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.850022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.881803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data" (OuterVolumeSpecName: "config-data") pod "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" (UID: "e66a0fa1-c84b-4b81-b5d9-3775d7dbd738"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.897512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data" (OuterVolumeSpecName: "config-data") pod "679c2346-5f5a-450e-b40d-1d371f1f8447" (UID: "679c2346-5f5a-450e-b40d-1d371f1f8447"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.898437 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.898496 4858 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92662c9-980a-41b0-ad01-bbb1cdaf864b-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.898514 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.898528 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679c2346-5f5a-450e-b40d-1d371f1f8447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.898542 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.922814 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.923897 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-central-agent" containerID="cri-o://b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663" gracePeriod=30 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.926626 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="proxy-httpd" containerID="cri-o://36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64" gracePeriod=30 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.926817 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-notification-agent" containerID="cri-o://4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab" gracePeriod=30 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.927611 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="sg-core" containerID="cri-o://d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d" gracePeriod=30 Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.950068 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:49:08 crc kubenswrapper[4858]: I1122 07:49:08.950403 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="02115b03-d8fe-4334-96d6-cfbde07fd00a" containerName="kube-state-metrics" containerID="cri-o://d2fce1b7f44ee254502c1ee4737ddad02ab713e7ede13cb487c2720cd88d281e" gracePeriod=30 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.061503 4858 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/memcached-0"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.061820 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerName="memcached" containerID="cri-o://f52562da73839518f25e57d06af939791fd8a1949a98847efb6f708599667a5d" gracePeriod=30 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.070695 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.071462 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-964b97968-m9n7r" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.319716 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-sxstk"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.331044 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4pmzl"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.356299 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-sxstk"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.369004 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4pmzl"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.379478 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystonecf57-account-delete-khlk2"] Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380013 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="dnsmasq-dns" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380031 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="dnsmasq-dns" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380040 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" containerName="nova-scheduler-scheduler" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" containerName="nova-scheduler-scheduler" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380068 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380074 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380092 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="ovsdbserver-nb" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380097 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" 
containerName="ovsdbserver-nb" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380110 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="init" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380116 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="init" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380129 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380135 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380146 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="mysql-bootstrap" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380151 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="mysql-bootstrap" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380161 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="galera" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380167 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="galera" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380176 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="ovsdbserver-sb" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380182 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="ovsdbserver-sb" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380192 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380199 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380210 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-server" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380215 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-server" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380226 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-httpd" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380232 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-httpd" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380243 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380251 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.380262 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c36de6-d90c-48e1-bfda-466b3818ed61" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380268 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c36de6-d90c-48e1-bfda-466b3818ed61" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380464 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" containerName="nova-scheduler-scheduler" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380480 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-httpd" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380493 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" containerName="galera" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380501 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380508 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ada962-6646-4da6-987d-6e9e277ee8b2" containerName="dnsmasq-dns" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380520 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" containerName="proxy-server" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380529 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380539 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380549 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="ovsdbserver-sb" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380560 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c36de6-d90c-48e1-bfda-466b3818ed61" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380570 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" containerName="openstack-network-exporter" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.380580 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fe3fb0-c1b4-4ca6-9d41-ca400a479fe6" containerName="ovsdbserver-nb" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.381392 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.384998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystonecf57-account-delete-khlk2"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.397435 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.397752 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7b67c6cff8-nl4sb" podUID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" containerName="keystone-api" containerID="cri-o://4b2278b5a2b63a8809b3b18c14d3d73fbbf028ec81bae4f82dec2b606ada88b7" gracePeriod=30 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.426412 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.454423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.454494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.464383 4858 generic.go:334] "Generic (PLEG): container finished" podID="d27a55dc-71d3-468f-b503-8436883c2771" containerID="7d85dd2bf391a295963c1c04a60ba1230b2aacca17a1680433770b7be5c7e8c8" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.464466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerDied","Data":"7d85dd2bf391a295963c1c04a60ba1230b2aacca17a1680433770b7be5c7e8c8"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.466471 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xgxrd"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.468295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"daa57087-ec21-4cff-aa47-68358e8f5039","Type":"ContainerDied","Data":"67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.468338 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fc3a439c377f091fb039fe5acb5199df498d36089cb786318b3f43f7b700b2" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.471936 4858 generic.go:334] "Generic (PLEG): container finished" podID="9023aa66-975c-44c6-8aba-cff06211fd31" containerID="5dae7ef1cf0b3974032face8f70aee5fa5e4c4f2e7d4ca85f75144f7a600b8fc" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.472039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerDied","Data":"5dae7ef1cf0b3974032face8f70aee5fa5e4c4f2e7d4ca85f75144f7a600b8fc"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.481527 4858 generic.go:334] "Generic (PLEG): container finished" podID="465c8e4d-cc9e-406b-8460-41e83f1dfadb" containerID="81422c98867143038ef4ffa6c2f72f05f237ab29f232ccca07fb76aa145ecc3f" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.481614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1521-account-delete-m9vdj" event={"ID":"465c8e4d-cc9e-406b-8460-41e83f1dfadb","Type":"ContainerDied","Data":"81422c98867143038ef4ffa6c2f72f05f237ab29f232ccca07fb76aa145ecc3f"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.488779 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xgxrd"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.495720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroneea0-account-delete-4d76b" event={"ID":"57b11c1e-be66-4546-bf19-b2a71c05256c","Type":"ContainerStarted","Data":"c5f22872e946765c3b927d5609c7ae86097005d9299f538ec9bec6ac660eef39"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.500551 4858 generic.go:334] "Generic (PLEG): container finished" podID="d8be274c-bb8a-43d2-8a56-dacb6789d343" containerID="fba205941defd92fcd251d1c2b531399282991083a611c7e300acf8909c975a6" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.500661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement9450-account-delete-9jrdm" event={"ID":"d8be274c-bb8a-43d2-8a56-dacb6789d343","Type":"ContainerDied","Data":"fba205941defd92fcd251d1c2b531399282991083a611c7e300acf8909c975a6"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.504111 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerID="51604b1dd7eceb22876c5f2824f93728dd6ccb3368e18bfb5bdbfd78f9ae8589" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.504181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerDied","Data":"51604b1dd7eceb22876c5f2824f93728dd6ccb3368e18bfb5bdbfd78f9ae8589"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.520216 4858 generic.go:334] "Generic (PLEG): container finished" podID="964bc658-f627-428c-9dbd-dd640e9394bc" containerID="edf1bfbecea443bbc7e129c90a394a31314a390042585bb52ca713b177380f29" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.520372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid4cc-account-delete-5tdjd" event={"ID":"964bc658-f627-428c-9dbd-dd640e9394bc","Type":"ContainerDied","Data":"edf1bfbecea443bbc7e129c90a394a31314a390042585bb52ca713b177380f29"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.538750 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/neutroneea0-account-delete-4d76b" secret="" err="secret \"galera-openstack-dockercfg-zfz58\" not found" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.551463 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442 is running failed: container process not found" containerID="1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.560762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.560861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.561049 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.561148 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:10.061103378 +0000 UTC m=+2311.902526384 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : configmap "openstack-scripts" not found Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.614182 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442 is running failed: container process not found" containerID="1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.647553 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442 is running failed: container process not found" containerID="1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.647604 4858 projected.go:194] Error preparing data for projected volume kube-api-access-hpsrz for pod openstack/keystonecf57-account-delete-khlk2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.647687 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:10.147668053 +0000 UTC m=+2311.989091059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpsrz" (UniqueName: "kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.647618 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" containerName="nova-cell1-conductor-conductor" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.653160 4858 generic.go:334] "Generic (PLEG): container finished" podID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerID="e3acbe684a3b1cf56d9ce339047e865b4bf5f7e2b06b06679ba47e5ef77b37e7" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.663736 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b2326f-6238-4686-8e42-5bd33c074357" path="/var/lib/kubelet/pods/43b2326f-6238-4686-8e42-5bd33c074357/volumes" Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.665361 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:09 crc kubenswrapper[4858]: E1122 07:49:09.665426 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts podName:57b11c1e-be66-4546-bf19-b2a71c05256c nodeName:}" failed. No retries permitted until 2025-11-22 07:49:10.165408424 +0000 UTC m=+2312.006831430 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts") pod "neutroneea0-account-delete-4d76b" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c") : configmap "openstack-scripts" not found Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.677619 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c36de6-d90c-48e1-bfda-466b3818ed61" path="/var/lib/kubelet/pods/56c36de6-d90c-48e1-bfda-466b3818ed61/volumes" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.678835 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6475152-0db7-4069-a206-1b854a1529d1" path="/var/lib/kubelet/pods/b6475152-0db7-4069-a206-1b854a1529d1/volumes" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.679704 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd99df9d-2d5a-4997-b876-4573a931ee39" path="/var/lib/kubelet/pods/bd99df9d-2d5a-4997-b876-4573a931ee39/volumes" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.681258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerDied","Data":"e3acbe684a3b1cf56d9ce339047e865b4bf5f7e2b06b06679ba47e5ef77b37e7"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.681302 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystonecf57-account-delete-khlk2"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.690749 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cf57-account-create-cz7dj"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.690800 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cf57-account-create-cz7dj"] Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.699406 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa36d9bc-2f0d-44bf-97d2-cc8785002875" containerID="95736b04b771d7768eb3f3b40cbcad3bbfcc5992261841d7f094e34a12830692" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.699523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0446-account-delete-s8t8x" event={"ID":"aa36d9bc-2f0d-44bf-97d2-cc8785002875","Type":"ContainerDied","Data":"95736b04b771d7768eb3f3b40cbcad3bbfcc5992261841d7f094e34a12830692"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.729927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a10b7a00-765d-465e-b80e-e795da936e68","Type":"ContainerDied","Data":"e15f9e87fe36673b614eb3863b21098a46ffc6d5d803ce00390940ef29cfa226"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.729996 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e15f9e87fe36673b614eb3863b21098a46ffc6d5d803ce00390940ef29cfa226" Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.743032 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutroneea0-account-delete-4d76b" podStartSLOduration=6.74299706 podStartE2EDuration="6.74299706s" podCreationTimestamp="2025-11-22 07:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:09.53971587 +0000 UTC m=+2311.381138876" watchObservedRunningTime="2025-11-22 07:49:09.74299706 +0000 UTC m=+2311.584420086" Nov 22 07:49:09 crc kubenswrapper[4858]: 
I1122 07:49:09.756741 4858 generic.go:334] "Generic (PLEG): container finished" podID="02115b03-d8fe-4334-96d6-cfbde07fd00a" containerID="d2fce1b7f44ee254502c1ee4737ddad02ab713e7ede13cb487c2720cd88d281e" exitCode=2 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.756821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02115b03-d8fe-4334-96d6-cfbde07fd00a","Type":"ContainerDied","Data":"d2fce1b7f44ee254502c1ee4737ddad02ab713e7ede13cb487c2720cd88d281e"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.767116 4858 generic.go:334] "Generic (PLEG): container finished" podID="31c63759-4028-4b22-acb3-c9c78f9cbfce" containerID="5e06a2e54f9ce93dc2ccfd9061b8c1c351688721b54e58b2245aca1b06036b6b" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.767241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell04e0e-account-delete-lp8zr" event={"ID":"31c63759-4028-4b22-acb3-c9c78f9cbfce","Type":"ContainerDied","Data":"5e06a2e54f9ce93dc2ccfd9061b8c1c351688721b54e58b2245aca1b06036b6b"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.772741 4858 generic.go:334] "Generic (PLEG): container finished" podID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerID="358f5eea1c33599a6ff9d0f49219f36c9849f142f1d83d32c74db35d272f5419" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.772833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerDied","Data":"358f5eea1c33599a6ff9d0f49219f36c9849f142f1d83d32c74db35d272f5419"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.774669 4858 generic.go:334] "Generic (PLEG): container finished" podID="a04a3a5c-6169-4e97-a167-1c168a8d1690" containerID="e3680cb319e6b254d9fb55c5079fa27ee9c17bc3d07f92905d53af9f7a03083e" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.774714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2343-account-delete-nsnxt" event={"ID":"a04a3a5c-6169-4e97-a167-1c168a8d1690","Type":"ContainerDied","Data":"e3680cb319e6b254d9fb55c5079fa27ee9c17bc3d07f92905d53af9f7a03083e"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.787175 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerID="ba14a6eadf4f6ecaaaac7e03e75a0670b78a68e6d491fb4484cc6fca27e15f36" exitCode=0 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.787278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerDied","Data":"ba14a6eadf4f6ecaaaac7e03e75a0670b78a68e6d491fb4484cc6fca27e15f36"} Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.801066 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerID="d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d" exitCode=2 Nov 22 07:49:09 crc kubenswrapper[4858]: I1122 07:49:09.801143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerDied","Data":"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.072195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.072680 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.072729 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:11.072714469 +0000 UTC m=+2312.914137465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : configmap "openstack-scripts" not found Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.174474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.174747 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.174888 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts podName:57b11c1e-be66-4546-bf19-b2a71c05256c nodeName:}" failed. No retries permitted until 2025-11-22 07:49:11.174862216 +0000 UTC m=+2313.016285292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts") pod "neutroneea0-account-delete-4d76b" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c") : configmap "openstack-scripts" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.179255 4858 projected.go:194] Error preparing data for projected volume kube-api-access-hpsrz for pod openstack/keystonecf57-account-delete-khlk2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.179382 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:11.179356101 +0000 UTC m=+2313.020779177 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpsrz" (UniqueName: "kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.552127 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.552461 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.564105 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.564168 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data podName:2a92d321-46e4-4291-8ac3-fc8f039b3dcf nodeName:}" failed. No retries permitted until 2025-11-22 07:49:18.564151962 +0000 UTC m=+2320.405574968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data") pod "rabbitmq-cell1-server-0" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.726658 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="galera" containerID="cri-o://a35f97adc654f8d53512934ced68b20cadeb39ebe2016eef17d8e1859247bf90" gracePeriod=29 Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.825839 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a4127577-b995-4dfb-95d8-e089acc50fc9","Type":"ContainerDied","Data":"daf9ab09ec867c074fe15fb94f7cd35fdee142beb125392596800f0345ce4901"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.825913 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf9ab09ec867c074fe15fb94f7cd35fdee142beb125392596800f0345ce4901" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.832783 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerID="36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64" exitCode=0 Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.832840 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerID="b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663" exitCode=0 Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.832953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerDied","Data":"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.832995 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerDied","Data":"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.839777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f787bd646-rhtm4" event={"ID":"4d4e5cb5-ebc0-4cec-a53e-452efc26731b","Type":"ContainerDied","Data":"82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.839839 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a9380b3a5201558768985ea0218f597eb7595a493365cfc9d75e5ed84cb7c0" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.843614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-964b97968-m9n7r" event={"ID":"d27a55dc-71d3-468f-b503-8436883c2771","Type":"ContainerDied","Data":"e4462a426eb7ae015819af160f2e441ccb1c9c85055dd67bd973b741939f08f1"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.843649 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4462a426eb7ae015819af160f2e441ccb1c9c85055dd67bd973b741939f08f1" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.845823 4858 generic.go:334] "Generic (PLEG): container finished" podID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerID="f52562da73839518f25e57d06af939791fd8a1949a98847efb6f708599667a5d" exitCode=0 Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.845888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9906e22d-4a3b-4ab7-86b7-2944b6af0f34","Type":"ContainerDied","Data":"f52562da73839518f25e57d06af939791fd8a1949a98847efb6f708599667a5d"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.847860 4858 generic.go:334] "Generic (PLEG): container finished" podID="d464fcfc-b91d-45e8-8c90-18083a632351" containerID="1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" exitCode=0 Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.847918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d464fcfc-b91d-45e8-8c90-18083a632351","Type":"ContainerDied","Data":"1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.860411 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.862697 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af987998-e4fb-4798-aaf5-6cb5f6a4670e","Type":"ContainerDied","Data":"92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863"} Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.862744 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f7f3b007329cff3b5db22cdc7e7400ca178fa3a31010f8bcba8b8406130863" Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.869563 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="57b11c1e-be66-4546-bf19-b2a71c05256c" containerID="c5f22872e946765c3b927d5609c7ae86097005d9299f538ec9bec6ac660eef39" exitCode=0 Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.870569 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:49:10 crc kubenswrapper[4858]: E1122 07:49:10.870639 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data podName:ddb1a203-c5d9-4ba5-b31b-c6134963af46 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:18.870621262 +0000 UTC m=+2320.712044268 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data") pod "rabbitmq-server-0" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46") : configmap "rabbitmq-config-data" not found Nov 22 07:49:10 crc kubenswrapper[4858]: I1122 07:49:10.871007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroneea0-account-delete-4d76b" event={"ID":"57b11c1e-be66-4546-bf19-b2a71c05256c","Type":"ContainerDied","Data":"c5f22872e946765c3b927d5609c7ae86097005d9299f538ec9bec6ac660eef39"} Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.074136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.074412 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.074820 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:13.074796552 +0000 UTC m=+2314.916219558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : configmap "openstack-scripts" not found Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.175698 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.176556 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts podName:57b11c1e-be66-4546-bf19-b2a71c05256c nodeName:}" failed. No retries permitted until 2025-11-22 07:49:13.176534356 +0000 UTC m=+2315.017957362 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts") pod "neutroneea0-account-delete-4d76b" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c") : configmap "openstack-scripts" not found Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.279705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.284295 4858 projected.go:194] Error preparing data for projected volume kube-api-access-hpsrz for pod openstack/keystonecf57-account-delete-khlk2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.284386 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:13.284363605 +0000 UTC m=+2315.125786621 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpsrz" (UniqueName: "kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.551230 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.570607 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725a427f-782d-4d51-95f9-24ff18fe1591" path="/var/lib/kubelet/pods/725a427f-782d-4d51-95f9-24ff18fe1591/volumes" Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.924804 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.927714 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.929478 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:49:11 crc kubenswrapper[4858]: E1122 07:49:11.929536 4858 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.934733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d1176a9-f83c-4c6e-8436-60b9affe0857","Type":"ContainerDied","Data":"9e3ea62f498db52ef54c3c4128291426160d3444c41617d48cfac1068fd67616"} Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.934800 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3ea62f498db52ef54c3c4128291426160d3444c41617d48cfac1068fd67616" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.944163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9023aa66-975c-44c6-8aba-cff06211fd31","Type":"ContainerDied","Data":"46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50"} Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.944240 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46caad0f5e2f65b6abde3c2a0175fe0534b88fbe8205718d3265ca906ca0dd50" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.968237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9906e22d-4a3b-4ab7-86b7-2944b6af0f34","Type":"ContainerDied","Data":"048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4"} Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.968302 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048787de93f736495867c46134edc0e72cf0883b9f58fcdbe2d0237088e3b6e4" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.977531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0446-account-delete-s8t8x" event={"ID":"aa36d9bc-2f0d-44bf-97d2-cc8785002875","Type":"ContainerDied","Data":"d1463af7a9d9f15176dc741ac42085ab13c541ddbf2dfe74f5ec863be3efd4b7"} Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.977641 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1463af7a9d9f15176dc741ac42085ab13c541ddbf2dfe74f5ec863be3efd4b7" Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.987879 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerID="fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff" exitCode=0 Nov 22 07:49:11 crc kubenswrapper[4858]: I1122 07:49:11.988014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerDied","Data":"fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.005677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2343-account-delete-nsnxt" event={"ID":"a04a3a5c-6169-4e97-a167-1c168a8d1690","Type":"ContainerDied","Data":"f902da28db88b34859f3e22f25cb37ad9b95216bcb1b4204df6f1924113f5320"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.005762 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f902da28db88b34859f3e22f25cb37ad9b95216bcb1b4204df6f1924113f5320" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.012649 4858 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/novacell04e0e-account-delete-lp8zr" event={"ID":"31c63759-4028-4b22-acb3-c9c78f9cbfce","Type":"ContainerDied","Data":"e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.012697 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e16c0eee2d8825c755262a834a7a7f110d3510f14d8be286757e21e354c5d284" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.017653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1521-account-delete-m9vdj" event={"ID":"465c8e4d-cc9e-406b-8460-41e83f1dfadb","Type":"ContainerDied","Data":"3f28982976a07e7584b2327e7e02f0e0e5ab58aad91bf0538b3d72b39a455f7a"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.017705 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f28982976a07e7584b2327e7e02f0e0e5ab58aad91bf0538b3d72b39a455f7a" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.021273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d464fcfc-b91d-45e8-8c90-18083a632351","Type":"ContainerDied","Data":"133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.021383 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133fcd125d630e7c648acaa07e8b9906c1b90f0a9eed2f4758a1a964a962ce3a" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.023370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02115b03-d8fe-4334-96d6-cfbde07fd00a","Type":"ContainerDied","Data":"b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.023409 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b241bc88d77adca468e24f40e0373b856a4cbb502bda2b51bff1013ecb31da62" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.025340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapid4cc-account-delete-5tdjd" event={"ID":"964bc658-f627-428c-9dbd-dd640e9394bc","Type":"ContainerDied","Data":"7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.025423 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7393f5006f9433c80b2ba89f4635b81bc2845f54c62887505a079ba1632313a4" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.027247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement9450-account-delete-9jrdm" event={"ID":"d8be274c-bb8a-43d2-8a56-dacb6789d343","Type":"ContainerDied","Data":"f7609f681037292b39c17e6a3fa031165ec22db2347ff44274fe7c63fa910b32"} Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.027277 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7609f681037292b39c17e6a3fa031165ec22db2347ff44274fe7c63fa910b32" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.124229 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.154915 4858 scope.go:117] "RemoveContainer" containerID="eb00d0789abf04eee5762b9ee56aabc63f0f1c94ae705447bb35180a0e8b87ca" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.156565 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.189169 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:49:12 crc kubenswrapper[4858]: E1122 07:49:12.230265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hpsrz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystonecf57-account-delete-khlk2" podUID="c4d53767-86e9-4e1c-930d-0d92af7e62e0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.243225 4858 scope.go:117] "RemoveContainer" containerID="6a71d1997501103b990d72ab680b1b604e4246555f70c6fd556826a4f81b697b" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.259824 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.278898 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.292496 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skbzc\" (UniqueName: \"kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vf6h\" (UniqueName: \"kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.306988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307017 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts\") pod \"daa57087-ec21-4cff-aa47-68358e8f5039\" (UID: \"daa57087-ec21-4cff-aa47-68358e8f5039\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id\") pod \"a10b7a00-765d-465e-b80e-e795da936e68\" (UID: \"a10b7a00-765d-465e-b80e-e795da936e68\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.307174 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle\") pod \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\" (UID: \"4d4e5cb5-ebc0-4cec-a53e-452efc26731b\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.308846 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.310186 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs" (OuterVolumeSpecName: "logs") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.310265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs" (OuterVolumeSpecName: "logs") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.313909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.315011 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.318832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.320140 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.327765 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.330518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.331804 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.335554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts" (OuterVolumeSpecName: "scripts") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.338195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h" (OuterVolumeSpecName: "kube-api-access-5vf6h") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). InnerVolumeSpecName "kube-api-access-5vf6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.338407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts" (OuterVolumeSpecName: "scripts") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.348216 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6547bffc85-6ngjc"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.351678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts" (OuterVolumeSpecName: "scripts") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.352844 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc" (OuterVolumeSpecName: "kube-api-access-skbzc") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "kube-api-access-skbzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.359660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f" (OuterVolumeSpecName: "kube-api-access-9fz5f") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "kube-api-access-9fz5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.370034 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.384439 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.399727 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.399864 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.409606 4858 scope.go:117] "RemoveContainer" containerID="2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.410555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.410629 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.410678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.410719 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.410794 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 
07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412357 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412950 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fz5f\" (UniqueName: \"kubernetes.io/projected/daa57087-ec21-4cff-aa47-68358e8f5039-kube-api-access-9fz5f\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412966 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412975 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skbzc\" (UniqueName: \"kubernetes.io/projected/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-kube-api-access-skbzc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412984 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/daa57087-ec21-4cff-aa47-68358e8f5039-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.412992 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vf6h\" (UniqueName: \"kubernetes.io/projected/a10b7a00-765d-465e-b80e-e795da936e68-kube-api-access-5vf6h\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413001 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413009 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daa57087-ec21-4cff-aa47-68358e8f5039-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413019 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413028 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413036 4858 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413044 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.413054 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a10b7a00-765d-465e-b80e-e795da936e68-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.416943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs" (OuterVolumeSpecName: "logs") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.418576 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.426931 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.426928 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts" (OuterVolumeSpecName: "scripts") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.434002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.437772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data" (OuterVolumeSpecName: "config-data") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.488098 4858 scope.go:117] "RemoveContainer" containerID="162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.501826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.507757 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.512891 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data" (OuterVolumeSpecName: "config-data") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45" (OuterVolumeSpecName: "kube-api-access-8wg45") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "kube-api-access-8wg45". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514696 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6m8n\" (UniqueName: \"kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptqnm\" (UniqueName: \"kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm\") pod \"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle\") pod \"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs\") pod 
\"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs\") pod \"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.514961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") pod \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\" (UID: \"af987998-e4fb-4798-aaf5-6cb5f6a4670e\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnlr\" (UniqueName: \"kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515313 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.515395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data\") pod \"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 
22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516190 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs" (OuterVolumeSpecName: "logs") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516568 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs\") pod \"3d1176a9-f83c-4c6e-8436-60b9affe0857\" (UID: \"3d1176a9-f83c-4c6e-8436-60b9affe0857\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts\") pod \"a4127577-b995-4dfb-95d8-e089acc50fc9\" (UID: \"a4127577-b995-4dfb-95d8-e089acc50fc9\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516984 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.517055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.517104 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data\") pod \"d27a55dc-71d3-468f-b503-8436883c2771\" (UID: \"d27a55dc-71d3-468f-b503-8436883c2771\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518647 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518683 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518697 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d1176a9-f83c-4c6e-8436-60b9affe0857-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518713 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af987998-e4fb-4798-aaf5-6cb5f6a4670e-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518728 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518760 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518777 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518789 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.516730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.518073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs" (OuterVolumeSpecName: "logs") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: W1122 07:49:12.518584 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/af987998-e4fb-4798-aaf5-6cb5f6a4670e/volumes/kubernetes.io~projected/kube-api-access-8wg45 Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.521430 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45" (OuterVolumeSpecName: "kube-api-access-8wg45") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "kube-api-access-8wg45". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.522216 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs" (OuterVolumeSpecName: "logs") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.530887 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.560017 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.567175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr" (OuterVolumeSpecName: "kube-api-access-zqnlr") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "kube-api-access-zqnlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.580734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm" (OuterVolumeSpecName: "kube-api-access-ptqnm") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "kube-api-access-ptqnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.582342 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.584476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.584558 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n" (OuterVolumeSpecName: "kube-api-access-f6m8n") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "kube-api-access-f6m8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.613576 4858 scope.go:117] "RemoveContainer" containerID="2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924" Nov 22 07:49:12 crc kubenswrapper[4858]: E1122 07:49:12.614864 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924\": container with ID starting with 2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924 not found: ID does not exist" containerID="2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.614943 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924"} err="failed to get container status \"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924\": rpc error: code = NotFound desc = could not find container \"2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924\": container with ID starting with 2a3b995f77b64f08060c3f1e5c36bb9a048baf4b470b3036bf4e66e5c61cb924 not found: ID does not exist" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.614995 4858 scope.go:117] "RemoveContainer" containerID="162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550" Nov 22 07:49:12 crc kubenswrapper[4858]: E1122 07:49:12.618572 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550\": container with ID starting with 162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550 not found: ID does not exist" containerID="162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.618626 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550"} err="failed to get container status \"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550\": rpc error: code = NotFound desc = could not find container \"162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550\": container with ID starting with 162320c07577da830f379f1d10c16ea33e143425c19d1458da6278acc52b1550 not found: ID does not exist" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.618657 4858 scope.go:117] "RemoveContainer" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle\") pod \"02115b03-d8fe-4334-96d6-cfbde07fd00a\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data\") pod \"d464fcfc-b91d-45e8-8c90-18083a632351\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle\") pod \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs\") pod \"9023aa66-975c-44c6-8aba-cff06211fd31\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620898 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config\") pod \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620956 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs\") pod \"02115b03-d8fe-4334-96d6-cfbde07fd00a\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.620980 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97hg\" (UniqueName: \"kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg\") pod \"d464fcfc-b91d-45e8-8c90-18083a632351\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config\") pod \"02115b03-d8fe-4334-96d6-cfbde07fd00a\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts\") pod \"964bc658-f627-428c-9dbd-dd640e9394bc\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data\") pod \"9023aa66-975c-44c6-8aba-cff06211fd31\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621145 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbnv7\" (UniqueName: \"kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7\") pod \"964bc658-f627-428c-9dbd-dd640e9394bc\" (UID: \"964bc658-f627-428c-9dbd-dd640e9394bc\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6p6v\" (UniqueName: \"kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v\") pod \"9023aa66-975c-44c6-8aba-cff06211fd31\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 
07:49:12.621240 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg75d\" (UniqueName: \"kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d\") pod \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs\") pod \"9023aa66-975c-44c6-8aba-cff06211fd31\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data\") pod \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle\") pod \"9023aa66-975c-44c6-8aba-cff06211fd31\" (UID: \"9023aa66-975c-44c6-8aba-cff06211fd31\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621380 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle\") pod \"d464fcfc-b91d-45e8-8c90-18083a632351\" (UID: \"d464fcfc-b91d-45e8-8c90-18083a632351\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5tq\" (UniqueName: \"kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq\") pod \"02115b03-d8fe-4334-96d6-cfbde07fd00a\" (UID: \"02115b03-d8fe-4334-96d6-cfbde07fd00a\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs\") pod \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\" (UID: \"9906e22d-4a3b-4ab7-86b7-2944b6af0f34\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.621981 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnlr\" (UniqueName: \"kubernetes.io/projected/d27a55dc-71d3-468f-b503-8436883c2771-kube-api-access-zqnlr\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622003 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a55dc-71d3-468f-b503-8436883c2771-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622019 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622034 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6m8n\" (UniqueName: \"kubernetes.io/projected/a4127577-b995-4dfb-95d8-e089acc50fc9-kube-api-access-f6m8n\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc 
kubenswrapper[4858]: I1122 07:49:12.622049 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622062 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptqnm\" (UniqueName: \"kubernetes.io/projected/3d1176a9-f83c-4c6e-8436-60b9affe0857-kube-api-access-ptqnm\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622070 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wg45\" (UniqueName: \"kubernetes.io/projected/af987998-e4fb-4798-aaf5-6cb5f6a4670e-kube-api-access-8wg45\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622096 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.622108 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4127577-b995-4dfb-95d8-e089acc50fc9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.623300 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9906e22d-4a3b-4ab7-86b7-2944b6af0f34" (UID: "9906e22d-4a3b-4ab7-86b7-2944b6af0f34"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.625388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs" (OuterVolumeSpecName: "logs") pod "9023aa66-975c-44c6-8aba-cff06211fd31" (UID: "9023aa66-975c-44c6-8aba-cff06211fd31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.625673 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.637378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v" (OuterVolumeSpecName: "kube-api-access-h6p6v") pod "9023aa66-975c-44c6-8aba-cff06211fd31" (UID: "9023aa66-975c-44c6-8aba-cff06211fd31"). InnerVolumeSpecName "kube-api-access-h6p6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.644618 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data" (OuterVolumeSpecName: "config-data") pod "9906e22d-4a3b-4ab7-86b7-2944b6af0f34" (UID: "9906e22d-4a3b-4ab7-86b7-2944b6af0f34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.647147 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "964bc658-f627-428c-9dbd-dd640e9394bc" (UID: "964bc658-f627-428c-9dbd-dd640e9394bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.657064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq" (OuterVolumeSpecName: "kube-api-access-rl5tq") pod "02115b03-d8fe-4334-96d6-cfbde07fd00a" (UID: "02115b03-d8fe-4334-96d6-cfbde07fd00a"). InnerVolumeSpecName "kube-api-access-rl5tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.677270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts" (OuterVolumeSpecName: "scripts") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.688884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7" (OuterVolumeSpecName: "kube-api-access-sbnv7") pod "964bc658-f627-428c-9dbd-dd640e9394bc" (UID: "964bc658-f627-428c-9dbd-dd640e9394bc"). InnerVolumeSpecName "kube-api-access-sbnv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.689393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg" (OuterVolumeSpecName: "kube-api-access-t97hg") pod "d464fcfc-b91d-45e8-8c90-18083a632351" (UID: "d464fcfc-b91d-45e8-8c90-18083a632351"). InnerVolumeSpecName "kube-api-access-t97hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.689645 4858 scope.go:117] "RemoveContainer" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.689924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d" (OuterVolumeSpecName: "kube-api-access-rg75d") pod "9906e22d-4a3b-4ab7-86b7-2944b6af0f34" (UID: "9906e22d-4a3b-4ab7-86b7-2944b6af0f34"). InnerVolumeSpecName "kube-api-access-rg75d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: E1122 07:49:12.690178 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792\": container with ID starting with 41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792 not found: ID does not exist" containerID="41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.690223 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792"} err="failed to get container status \"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792\": rpc error: code = NotFound desc = could not find container \"41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792\": container with ID starting with 41a2fc93dc3cd9dad36ff17b89b8ae131e66c16f225630acca97287b53e66792 not found: ID does not exist" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.718113 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.726945 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts\") pod \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.727286 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlfsz\" (UniqueName: \"kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz\") pod \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\" (UID: \"aa36d9bc-2f0d-44bf-97d2-cc8785002875\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728764 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728808 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728837 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97hg\" (UniqueName: \"kubernetes.io/projected/d464fcfc-b91d-45e8-8c90-18083a632351-kube-api-access-t97hg\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728853 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964bc658-f627-428c-9dbd-dd640e9394bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbnv7\" (UniqueName: \"kubernetes.io/projected/964bc658-f627-428c-9dbd-dd640e9394bc-kube-api-access-sbnv7\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728893 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6p6v\" (UniqueName: 
\"kubernetes.io/projected/9023aa66-975c-44c6-8aba-cff06211fd31-kube-api-access-h6p6v\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728908 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9023aa66-975c-44c6-8aba-cff06211fd31-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728921 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg75d\" (UniqueName: \"kubernetes.io/projected/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-kube-api-access-rg75d\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728932 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.728945 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5tq\" (UniqueName: \"kubernetes.io/projected/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-api-access-rl5tq\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.730744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa36d9bc-2f0d-44bf-97d2-cc8785002875" (UID: "aa36d9bc-2f0d-44bf-97d2-cc8785002875"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.758871 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz" (OuterVolumeSpecName: "kube-api-access-nlfsz") pod "aa36d9bc-2f0d-44bf-97d2-cc8785002875" (UID: "aa36d9bc-2f0d-44bf-97d2-cc8785002875"). InnerVolumeSpecName "kube-api-access-nlfsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.759135 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.830872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq9g\" (UniqueName: \"kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g\") pod \"a04a3a5c-6169-4e97-a167-1c168a8d1690\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.831278 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts\") pod \"a04a3a5c-6169-4e97-a167-1c168a8d1690\" (UID: \"a04a3a5c-6169-4e97-a167-1c168a8d1690\") " Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.835828 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a04a3a5c-6169-4e97-a167-1c168a8d1690" (UID: "a04a3a5c-6169-4e97-a167-1c168a8d1690"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.840614 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.840659 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa36d9bc-2f0d-44bf-97d2-cc8785002875-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.840678 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04a3a5c-6169-4e97-a167-1c168a8d1690-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.840689 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlfsz\" (UniqueName: \"kubernetes.io/projected/aa36d9bc-2f0d-44bf-97d2-cc8785002875-kube-api-access-nlfsz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.930514 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g" (OuterVolumeSpecName: "kube-api-access-7rq9g") pod "a04a3a5c-6169-4e97-a167-1c168a8d1690" (UID: "a04a3a5c-6169-4e97-a167-1c168a8d1690"). InnerVolumeSpecName "kube-api-access-7rq9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.930592 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.959510 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq9g\" (UniqueName: \"kubernetes.io/projected/a04a3a5c-6169-4e97-a167-1c168a8d1690-kube-api-access-7rq9g\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:12 crc kubenswrapper[4858]: I1122 07:49:12.959564 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.042823 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1521-account-create-dnfjd"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.046590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d464fcfc-b91d-45e8-8c90-18083a632351" (UID: "d464fcfc-b91d-45e8-8c90-18083a632351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.061007 4858 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 22 07:49:13 crc kubenswrapper[4858]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-22T07:49:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 22 07:49:13 crc kubenswrapper[4858]: /etc/init.d/functions: line 589: 792 Alarm clock "$@" Nov 22 07:49:13 crc kubenswrapper[4858]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-rm92c" message=< Nov 22 07:49:13 crc kubenswrapper[4858]: Exiting ovn-controller (1) [FAILED] Nov 22 07:49:13 crc kubenswrapper[4858]: Killing ovn-controller (1) [ OK ] Nov 22 07:49:13 crc kubenswrapper[4858]: Killing ovn-controller (1) with SIGKILL [ OK ] Nov 22 07:49:13 crc kubenswrapper[4858]: 2025-11-22T07:49:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 22 07:49:13 crc kubenswrapper[4858]: /etc/init.d/functions: line 589: 792 Alarm clock "$@" Nov 22 07:49:13 crc kubenswrapper[4858]: > Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.061115 4858 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 22 07:49:13 crc kubenswrapper[4858]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-22T07:49:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 22 07:49:13 crc kubenswrapper[4858]: /etc/init.d/functions: line 589: 792 Alarm clock "$@" Nov 22 07:49:13 crc kubenswrapper[4858]: > pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" containerID="cri-o://090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.061268 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" containerID="cri-o://090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" gracePeriod=22 Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.061747 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.074272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutroneea0-account-delete-4d76b" event={"ID":"57b11c1e-be66-4546-bf19-b2a71c05256c","Type":"ContainerDied","Data":"0ffe925291d94a54dd7e74083ebb0526ce005621900a18e567256fec4d4052b8"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.074345 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ffe925291d94a54dd7e74083ebb0526ce005621900a18e567256fec4d4052b8" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.087633 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.099023 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-s8p7b"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.105788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.163725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.165487 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.165587 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:17.165558164 +0000 UTC m=+2319.006981330 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : configmap "openstack-scripts" not found Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.165841 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.182411 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1521-account-create-dnfjd"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.198611 4858 generic.go:334] "Generic (PLEG): container finished" podID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerID="6bf2d7b9ad4531e14c9327a6a63588e930346a2e2dcae212eff919b9b5b4719c" exitCode=0 Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.198732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerDied","Data":"6bf2d7b9ad4531e14c9327a6a63588e930346a2e2dcae212eff919b9b5b4719c"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.208012 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" containerID="4b2278b5a2b63a8809b3b18c14d3d73fbbf028ec81bae4f82dec2b606ada88b7" exitCode=0 Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.208122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b67c6cff8-nl4sb" event={"ID":"f4d4fda9-31aa-46b8-983a-ffa32db2516c","Type":"ContainerDied","Data":"4b2278b5a2b63a8809b3b18c14d3d73fbbf028ec81bae4f82dec2b606ada88b7"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.211651 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-s8p7b"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.255621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.272503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a92d321-46e4-4291-8ac3-fc8f039b3dcf","Type":"ContainerDied","Data":"3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.272561 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc2f74303a7ee7a0106c8ec299b4e1546de0a1dc8f162fb3675896f04439d91" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.272960 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.273088 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.273160 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts podName:57b11c1e-be66-4546-bf19-b2a71c05256c nodeName:}" failed. No retries permitted until 2025-11-22 07:49:17.273137865 +0000 UTC m=+2319.114560871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts") pod "neutroneea0-account-delete-4d76b" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c") : configmap "openstack-scripts" not found Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.273820 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gmvcq"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.296735 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data" (OuterVolumeSpecName: "config-data") pod "9023aa66-975c-44c6-8aba-cff06211fd31" (UID: "9023aa66-975c-44c6-8aba-cff06211fd31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.301551 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.303233 4858 generic.go:334] "Generic (PLEG): container finished" podID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerID="87dc9b2e06bc62a486c9c4668b5e0075930637436dc360e930cf4a1288e9f350" exitCode=0 Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.303492 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.305600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerDied","Data":"87dc9b2e06bc62a486c9c4668b5e0075930637436dc360e930cf4a1288e9f350"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.305777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb1a203-c5d9-4ba5-b31b-c6134963af46","Type":"ContainerDied","Data":"6cfb57607d2c3f225692b0d2f9d43db8bd774cf8c6c30d64695e74df969988a4"} Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.305822 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfb57607d2c3f225692b0d2f9d43db8bd774cf8c6c30d64695e74df969988a4" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.305979 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.306235 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.306473 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0446-account-delete-s8t8x" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.306982 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance2343-account-delete-nsnxt" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308159 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f787bd646-rhtm4" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308208 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308879 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308928 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-964b97968-m9n7r" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308961 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.308997 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapid4cc-account-delete-5tdjd" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.309453 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.309513 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.309553 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.309644 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.320384 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gmvcq"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.355176 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923 is running failed: container process not found" containerID="090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.355735 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2343-account-delete-nsnxt"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.355892 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923 is running failed: container process not found" containerID="090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.356378 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923 is running failed: container process not found" containerID="090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.356420 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-rm92c" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.365371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data" (OuterVolumeSpecName: "config-data") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.374240 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance2343-account-delete-nsnxt"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.378389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") pod \"keystonecf57-account-delete-khlk2\" (UID: \"c4d53767-86e9-4e1c-930d-0d92af7e62e0\") " pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.378988 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.379014 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.379028 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.387823 4858 projected.go:194] Error preparing data for projected volume kube-api-access-hpsrz for pod openstack/keystonecf57-account-delete-khlk2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.387941 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz podName:c4d53767-86e9-4e1c-930d-0d92af7e62e0 nodeName:}" failed. No retries permitted until 2025-11-22 07:49:17.387914288 +0000 UTC m=+2319.229337294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpsrz" (UniqueName: "kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz") pod "keystonecf57-account-delete-khlk2" (UID: "c4d53767-86e9-4e1c-930d-0d92af7e62e0") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.391479 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.393168 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2343-account-create-tlktz"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.403734 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.403983 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.417186 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.417301 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.422791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.435044 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.435703 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.439390 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2343-account-create-tlktz"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.455925 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-586g4"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.474015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.474125 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data" (OuterVolumeSpecName: "config-data") pod "d464fcfc-b91d-45e8-8c90-18083a632351" (UID: "d464fcfc-b91d-45e8-8c90-18083a632351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.480666 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-586g4"] Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.485628 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daa57087-ec21-4cff-aa47-68358e8f5039" (UID: "daa57087-ec21-4cff-aa47-68358e8f5039"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.487032 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:13 crc kubenswrapper[4858]: E1122 07:49:13.487128 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.487953 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.487978 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.487994 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d464fcfc-b91d-45e8-8c90-18083a632351-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.488007 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa57087-ec21-4cff-aa47-68358e8f5039-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.488020 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.502617 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.590379 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.607055 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "9906e22d-4a3b-4ab7-86b7-2944b6af0f34" (UID: "9906e22d-4a3b-4ab7-86b7-2944b6af0f34"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.607269 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02115b03-d8fe-4334-96d6-cfbde07fd00a" (UID: "02115b03-d8fe-4334-96d6-cfbde07fd00a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.613449 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ea9610-8d23-4826-af0f-3b82ee456527" path="/var/lib/kubelet/pods/07ea9610-8d23-4826-af0f-3b82ee456527/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.614488 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f26956e-77a1-4cef-8fe2-1c5e398f7b96" path="/var/lib/kubelet/pods/2f26956e-77a1-4cef-8fe2-1c5e398f7b96/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.615181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316e9e3f-ff34-4e81-9e22-aa5aa167ad9d" path="/var/lib/kubelet/pods/316e9e3f-ff34-4e81-9e22-aa5aa167ad9d/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.616591 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679c2346-5f5a-450e-b40d-1d371f1f8447" path="/var/lib/kubelet/pods/679c2346-5f5a-450e-b40d-1d371f1f8447/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.620003 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6936b381-bdd5-459a-b440-a1b6ae1aba52" path="/var/lib/kubelet/pods/6936b381-bdd5-459a-b440-a1b6ae1aba52/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.624398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9023aa66-975c-44c6-8aba-cff06211fd31" (UID: "9023aa66-975c-44c6-8aba-cff06211fd31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.629205 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04a3a5c-6169-4e97-a167-1c168a8d1690" path="/var/lib/kubelet/pods/a04a3a5c-6169-4e97-a167-1c168a8d1690/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.633281 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2adb039-7bf0-4b67-b6e5-28c0e7692ccb" path="/var/lib/kubelet/pods/e2adb039-7bf0-4b67-b6e5-28c0e7692ccb/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.634104 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66a0fa1-c84b-4b81-b5d9-3775d7dbd738" path="/var/lib/kubelet/pods/e66a0fa1-c84b-4b81-b5d9-3775d7dbd738/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.634758 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89df03d-10c4-4a66-80dd-272c6ba5a2ae" path="/var/lib/kubelet/pods/f89df03d-10c4-4a66-80dd-272c6ba5a2ae/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.635962 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5221eb-b8ee-4271-a2fe-627f0d08d2cf" path="/var/lib/kubelet/pods/ff5221eb-b8ee-4271-a2fe-627f0d08d2cf/volumes" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.660045 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "02115b03-d8fe-4334-96d6-cfbde07fd00a" (UID: "02115b03-d8fe-4334-96d6-cfbde07fd00a"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.692453 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.692501 4858 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.692513 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.692526 4858 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.735253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9906e22d-4a3b-4ab7-86b7-2944b6af0f34" (UID: "9906e22d-4a3b-4ab7-86b7-2944b6af0f34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.764834 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.807368 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.807546 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e22d-4a3b-4ab7-86b7-2944b6af0f34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.892541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.911547 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.923256 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.923299 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.928132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data" (OuterVolumeSpecName: "config-data") pod "af987998-e4fb-4798-aaf5-6cb5f6a4670e" (UID: "af987998-e4fb-4798-aaf5-6cb5f6a4670e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.935744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "02115b03-d8fe-4334-96d6-cfbde07fd00a" (UID: "02115b03-d8fe-4334-96d6-cfbde07fd00a"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.937572 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.941656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4858]: I1122 07:49:13.981475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data" (OuterVolumeSpecName: "config-data") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.000359 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data" (OuterVolumeSpecName: "config-data") pod "a4127577-b995-4dfb-95d8-e089acc50fc9" (UID: "a4127577-b995-4dfb-95d8-e089acc50fc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.017913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d27a55dc-71d3-468f-b503-8436883c2771" (UID: "d27a55dc-71d3-468f-b503-8436883c2771"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.022638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9023aa66-975c-44c6-8aba-cff06211fd31" (UID: "9023aa66-975c-44c6-8aba-cff06211fd31"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026019 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026040 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af987998-e4fb-4798-aaf5-6cb5f6a4670e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026052 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4127577-b995-4dfb-95d8-e089acc50fc9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026061 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026073 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9023aa66-975c-44c6-8aba-cff06211fd31-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026084 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026092 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a55dc-71d3-468f-b503-8436883c2771-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.026103 4858 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02115b03-d8fe-4334-96d6-cfbde07fd00a-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.046792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3d1176a9-f83c-4c6e-8436-60b9affe0857" (UID: "3d1176a9-f83c-4c6e-8436-60b9affe0857"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.053199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data" (OuterVolumeSpecName: "config-data") pod "a10b7a00-765d-465e-b80e-e795da936e68" (UID: "a10b7a00-765d-465e-b80e-e795da936e68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.098469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4d4e5cb5-ebc0-4cec-a53e-452efc26731b" (UID: "4d4e5cb5-ebc0-4cec-a53e-452efc26731b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.130175 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10b7a00-765d-465e-b80e-e795da936e68-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.130275 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1176a9-f83c-4c6e-8436-60b9affe0857-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.130289 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4e5cb5-ebc0-4cec-a53e-452efc26731b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.330770 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerID="a35f97adc654f8d53512934ced68b20cadeb39ebe2016eef17d8e1859247bf90" exitCode=0 Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.337312 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2/ovn-northd/0.log" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.337442 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" exitCode=139 Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.340311 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rm92c_4636a7e4-bda9-4b76-91ab-87ed6e121b50/ovn-controller/0.log" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.340358 4858 generic.go:334] "Generic (PLEG): container finished" podID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerID="090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" exitCode=137 Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.344022 4858 generic.go:334] "Generic (PLEG): container finished" podID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerID="459ed18256c6e74e65f42b2044fae1a1c6a3d48927d45cffc496a022915a3956" exitCode=0 Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerDied","Data":"a35f97adc654f8d53512934ced68b20cadeb39ebe2016eef17d8e1859247bf90"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428746 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4ec286aa-6594-4e36-b307-c8ffaa0e59de","Type":"ContainerDied","Data":"a7a8a85beea11de66210f7a2b3bd0111d85ac468dafbb4ef22ffe263e58d9928"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428768 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a8a85beea11de66210f7a2b3bd0111d85ac468dafbb4ef22ffe263e58d9928" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428786 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerDied","Data":"6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2","Type":"ContainerDied","Data":"64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428844 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b210afffbd52131ede91ace21e78d7c8c4a66ed4a668b6837b48294c99b069" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428860 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-eea0-account-create-cvdw5"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428877 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c" event={"ID":"4636a7e4-bda9-4b76-91ab-87ed6e121b50","Type":"ContainerDied","Data":"090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rm92c" event={"ID":"4636a7e4-bda9-4b76-91ab-87ed6e121b50","Type":"ContainerDied","Data":"76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428909 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76540b03a4667f542bf36ad5cfa34e28814f1c6a239d85a470c6073986783595" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerDied","Data":"459ed18256c6e74e65f42b2044fae1a1c6a3d48927d45cffc496a022915a3956"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428947 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" event={"ID":"04d1b1fd-682c-499c-8f5b-f22d4513217a","Type":"ContainerDied","Data":"fb6a9d4d58deb91d9d36b843716d0349a508180b346b3c085291d8fc93c19c49"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428961 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6a9d4d58deb91d9d36b843716d0349a508180b346b3c085291d8fc93c19c49" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5744c7f6cf-flhrq" 
event={"ID":"eaa777a2-4dd0-407d-b615-34d7fcd0845b","Type":"ContainerDied","Data":"88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.428990 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88949e6f1477e795fd131a83eb1924c3b44b3b497885b52a591771ba1a3d48f3" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b67c6cff8-nl4sb" event={"ID":"f4d4fda9-31aa-46b8-983a-ffa32db2516c","Type":"ContainerDied","Data":"520f9aba91bdcb850898081ba1e44197856612df9eaf0f1a4bd49d34d96bab94"} Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429017 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520f9aba91bdcb850898081ba1e44197856612df9eaf0f1a4bd49d34d96bab94" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429032 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-eea0-account-create-cvdw5"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429057 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lc8rx"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429076 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lc8rx"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429094 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9450-account-create-vlh78"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429109 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429128 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9450-account-create-vlh78"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429147 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xg4xq"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429165 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xg4xq"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429184 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0446-account-create-d8spp"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429199 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0446-account-create-d8spp"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429213 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican0446-account-delete-s8t8x"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429230 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican0446-account-delete-s8t8x"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429246 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-567gq"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429262 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-567gq"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429277 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4e0e-account-create-nqlr6"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429291 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.429307 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4e0e-account-create-nqlr6"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430698 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ct822"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430731 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ct822"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d4cc-account-create-4qfzg"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430809 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d4cc-account-create-4qfzg"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430824 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapid4cc-account-delete-5tdjd"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.430840 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapid4cc-account-delete-5tdjd"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.526236 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.559539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.599932 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.622801 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649397 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rjw\" (UniqueName: \"kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw\") pod \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwfz\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67djw\" (UniqueName: \"kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw\") pod \"31c63759-4028-4b22-acb3-c9c78f9cbfce\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649673 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfqwh\" (UniqueName: \"kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh\") pod 
\"d8be274c-bb8a-43d2-8a56-dacb6789d343\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.649989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts\") pod \"31c63759-4028-4b22-acb3-c9c78f9cbfce\" (UID: \"31c63759-4028-4b22-acb3-c9c78f9cbfce\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts\") pod \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\" (UID: \"465c8e4d-cc9e-406b-8460-41e83f1dfadb\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie\") pod \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\" (UID: \"2a92d321-46e4-4291-8ac3-fc8f039b3dcf\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.650456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts\") pod \"d8be274c-bb8a-43d2-8a56-dacb6789d343\" (UID: \"d8be274c-bb8a-43d2-8a56-dacb6789d343\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.651043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.659827 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.660876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info" (OuterVolumeSpecName: "pod-info") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.662567 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.663698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8be274c-bb8a-43d2-8a56-dacb6789d343" (UID: "d8be274c-bb8a-43d2-8a56-dacb6789d343"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.666675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw" (OuterVolumeSpecName: "kube-api-access-m6rjw") pod "465c8e4d-cc9e-406b-8460-41e83f1dfadb" (UID: "465c8e4d-cc9e-406b-8460-41e83f1dfadb"). InnerVolumeSpecName "kube-api-access-m6rjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.667724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31c63759-4028-4b22-acb3-c9c78f9cbfce" (UID: "31c63759-4028-4b22-acb3-c9c78f9cbfce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.667781 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "465c8e4d-cc9e-406b-8460-41e83f1dfadb" (UID: "465c8e4d-cc9e-406b-8460-41e83f1dfadb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.668159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.671583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.678197 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz" (OuterVolumeSpecName: "kube-api-access-mvwfz") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "kube-api-access-mvwfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.681198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw" (OuterVolumeSpecName: "kube-api-access-67djw") pod "31c63759-4028-4b22-acb3-c9c78f9cbfce" (UID: "31c63759-4028-4b22-acb3-c9c78f9cbfce"). InnerVolumeSpecName "kube-api-access-67djw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.688792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh" (OuterVolumeSpecName: "kube-api-access-hfqwh") pod "d8be274c-bb8a-43d2-8a56-dacb6789d343" (UID: "d8be274c-bb8a-43d2-8a56-dacb6789d343"). InnerVolumeSpecName "kube-api-access-hfqwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.690488 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.716787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.728858 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-964b97968-m9n7r"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.744994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.760684 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.765973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf" (OuterVolumeSpecName: "server-conf") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767910 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767949 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8be274c-bb8a-43d2-8a56-dacb6789d343-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767959 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767974 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rjw\" (UniqueName: \"kubernetes.io/projected/465c8e4d-cc9e-406b-8460-41e83f1dfadb-kube-api-access-m6rjw\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767988 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwfz\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-kube-api-access-mvwfz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.767998 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768009 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67djw\" (UniqueName: \"kubernetes.io/projected/31c63759-4028-4b22-acb3-c9c78f9cbfce-kube-api-access-67djw\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768017 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768029 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768040 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfqwh\" (UniqueName: \"kubernetes.io/projected/d8be274c-bb8a-43d2-8a56-dacb6789d343-kube-api-access-hfqwh\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768052 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/31c63759-4028-4b22-acb3-c9c78f9cbfce-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768130 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465c8e4d-cc9e-406b-8460-41e83f1dfadb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768167 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.768182 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.769788 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.775036 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data" (OuterVolumeSpecName: "config-data") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.779848 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.792775 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.815804 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.824585 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.834135 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.835633 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.845411 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.850314 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.854691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2a92d321-46e4-4291-8ac3-fc8f039b3dcf" (UID: "2a92d321-46e4-4291-8ac3-fc8f039b3dcf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.857099 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.870027 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.870095 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a92d321-46e4-4291-8ac3-fc8f039b3dcf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.870112 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.899011 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.938581 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.938787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.976665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.976729 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.976881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.976959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts\") pod \"57b11c1e-be66-4546-bf19-b2a71c05256c\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977047 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2bx7\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7\") pod 
\"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcgc\" (UniqueName: \"kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc\") pod \"57b11c1e-be66-4546-bf19-b2a71c05256c\" (UID: \"57b11c1e-be66-4546-bf19-b2a71c05256c\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.977503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.978716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.978888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.978953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.978994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret\") pod \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\" (UID: \"ddb1a203-c5d9-4ba5-b31b-c6134963af46\") " Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.980266 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.981934 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57b11c1e-be66-4546-bf19-b2a71c05256c" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.982977 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.983266 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.983649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.983969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.985955 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7" (OuterVolumeSpecName: "kube-api-access-w2bx7") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "kube-api-access-w2bx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.995528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4858]: I1122 07:49:14.995756 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.025857 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info" (OuterVolumeSpecName: "pod-info") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.030006 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.051180 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.053653 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc" (OuterVolumeSpecName: "kube-api-access-dzcgc") pod "57b11c1e-be66-4546-bf19-b2a71c05256c" (UID: "57b11c1e-be66-4546-bf19-b2a71c05256c"). InnerVolumeSpecName "kube-api-access-dzcgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.057592 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.061019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data" (OuterVolumeSpecName: "config-data") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.070229 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081567 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081662 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom\") pod \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081704 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data\") pod \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.081931 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs\") pod \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nnbz\" (UniqueName: \"kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle\") pod \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn8bl\" (UniqueName: \"kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl\") pod \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\" (UID: \"eaa777a2-4dd0-407d-b615-34d7fcd0845b\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082428 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs\") pod \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\" (UID: \"f4d4fda9-31aa-46b8-983a-ffa32db2516c\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082973 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.082998 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57b11c1e-be66-4546-bf19-b2a71c05256c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083013 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083044 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2bx7\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-kube-api-access-w2bx7\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083056 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083072 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcgc\" (UniqueName: \"kubernetes.io/projected/57b11c1e-be66-4546-bf19-b2a71c05256c-kube-api-access-dzcgc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083084 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083097 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb1a203-c5d9-4ba5-b31b-c6134963af46-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083107 4858 reconciler_common.go:293] "Volume detached for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb1a203-c5d9-4ba5-b31b-c6134963af46-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.083120 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.085808 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2/ovn-northd/0.log" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.085938 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.089701 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eaa777a2-4dd0-407d-b615-34d7fcd0845b" (UID: "eaa777a2-4dd0-407d-b615-34d7fcd0845b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.103095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz" (OuterVolumeSpecName: "kube-api-access-9nnbz") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "kube-api-access-9nnbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.103893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs" (OuterVolumeSpecName: "logs") pod "eaa777a2-4dd0-407d-b615-34d7fcd0845b" (UID: "eaa777a2-4dd0-407d-b615-34d7fcd0845b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.106376 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.116401 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.141093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.142502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts" (OuterVolumeSpecName: "scripts") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.143431 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rm92c_4636a7e4-bda9-4b76-91ab-87ed6e121b50/ovn-controller/0.log" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.144060 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.144555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.152887 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.155209 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.162437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl" (OuterVolumeSpecName: "kube-api-access-dn8bl") pod "eaa777a2-4dd0-407d-b615-34d7fcd0845b" (UID: "eaa777a2-4dd0-407d-b615-34d7fcd0845b"). InnerVolumeSpecName "kube-api-access-dn8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.166183 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle\") pod \"04d1b1fd-682c-499c-8f5b-f22d4513217a\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir\") pod 
\"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs\") pod \"04d1b1fd-682c-499c-8f5b-f22d4513217a\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqnlz\" (UniqueName: \"kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184898 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184939 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom\") pod \"04d1b1fd-682c-499c-8f5b-f22d4513217a\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.184981 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data\") pod \"04d1b1fd-682c-499c-8f5b-f22d4513217a\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle\") pod 
\"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185220 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs\") pod \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74h2w\" (UniqueName: \"kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w\") pod \"04d1b1fd-682c-499c-8f5b-f22d4513217a\" (UID: \"04d1b1fd-682c-499c-8f5b-f22d4513217a\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185532 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmwr\" (UniqueName: \"kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185564 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185616 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config\") pod 
\"4ec286aa-6594-4e36-b307-c8ffaa0e59de\" (UID: \"4ec286aa-6594-4e36-b307-c8ffaa0e59de\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs\") pod \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\" (UID: \"4636a7e4-bda9-4b76-91ab-87ed6e121b50\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.185713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw68r\" (UniqueName: \"kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186152 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186170 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa777a2-4dd0-407d-b615-34d7fcd0845b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186182 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nnbz\" (UniqueName: \"kubernetes.io/projected/f4d4fda9-31aa-46b8-983a-ffa32db2516c-kube-api-access-9nnbz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186195 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn8bl\" (UniqueName: \"kubernetes.io/projected/eaa777a2-4dd0-407d-b615-34d7fcd0845b-kube-api-access-dn8bl\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186206 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186217 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186226 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.186965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.187952 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs" (OuterVolumeSpecName: "logs") pod "04d1b1fd-682c-499c-8f5b-f22d4513217a" (UID: "04d1b1fd-682c-499c-8f5b-f22d4513217a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.188093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config" (OuterVolumeSpecName: "config") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.188774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.188908 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run" (OuterVolumeSpecName: "var-run") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.189232 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.200407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts" (OuterVolumeSpecName: "scripts") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.201312 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.201505 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.201524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.201988 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.202652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.211461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts" (OuterVolumeSpecName: "scripts") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.220865 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.221643 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf" (OuterVolumeSpecName: "server-conf") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.222704 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.233457 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.237941 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz" (OuterVolumeSpecName: "kube-api-access-lqnlz") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "kube-api-access-lqnlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.240777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r" (OuterVolumeSpecName: "kube-api-access-nw68r") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "kube-api-access-nw68r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.243432 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f787bd646-rhtm4"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.269732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04d1b1fd-682c-499c-8f5b-f22d4513217a" (UID: "04d1b1fd-682c-499c-8f5b-f22d4513217a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.270047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w" (OuterVolumeSpecName: "kube-api-access-74h2w") pod "04d1b1fd-682c-499c-8f5b-f22d4513217a" (UID: "04d1b1fd-682c-499c-8f5b-f22d4513217a"). InnerVolumeSpecName "kube-api-access-74h2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.273115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data" (OuterVolumeSpecName: "config-data") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.288905 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr" (OuterVolumeSpecName: "kube-api-access-2dmwr") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "kube-api-access-2dmwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.294983 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw68r\" (UniqueName: \"kubernetes.io/projected/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-kube-api-access-nw68r\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295039 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4636a7e4-bda9-4b76-91ab-87ed6e121b50-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295055 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295072 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295086 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d1b1fd-682c-499c-8f5b-f22d4513217a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295099 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqnlz\" (UniqueName: \"kubernetes.io/projected/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kube-api-access-lqnlz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295110 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295121 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295133 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295147 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295158 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb1a203-c5d9-4ba5-b31b-c6134963af46-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295170 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295184 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 
07:49:15.295198 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295210 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74h2w\" (UniqueName: \"kubernetes.io/projected/04d1b1fd-682c-499c-8f5b-f22d4513217a-kube-api-access-74h2w\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295224 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295235 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4636a7e4-bda9-4b76-91ab-87ed6e121b50-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295246 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.295259 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4ec286aa-6594-4e36-b307-c8ffaa0e59de-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.299002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.299900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.313467 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.313551 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.318605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.327673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa777a2-4dd0-407d-b615-34d7fcd0845b" (UID: "eaa777a2-4dd0-407d-b615-34d7fcd0845b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.331095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ddb1a203-c5d9-4ba5-b31b-c6134963af46" (UID: "ddb1a203-c5d9-4ba5-b31b-c6134963af46"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.359219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.360980 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04d1b1fd-682c-499c-8f5b-f22d4513217a" (UID: "04d1b1fd-682c-499c-8f5b-f22d4513217a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.366685 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonecf57-account-delete-khlk2" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.366902 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell04e0e-account-delete-lp8zr" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.366964 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement9450-account-delete-9jrdm" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367232 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367557 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rm92c" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367841 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1521-account-delete-m9vdj" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367889 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57cdc95956-lbjhn" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367847 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.367847 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5744c7f6cf-flhrq" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.368150 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.368184 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b67c6cff8-nl4sb" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.368098 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutroneea0-account-delete-4d76b" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.371038 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data" (OuterVolumeSpecName: "config-data") pod "eaa777a2-4dd0-407d-b615-34d7fcd0845b" (UID: "eaa777a2-4dd0-407d-b615-34d7fcd0845b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398890 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmwr\" (UniqueName: \"kubernetes.io/projected/4636a7e4-bda9-4b76-91ab-87ed6e121b50-kube-api-access-2dmwr\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398924 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398938 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398948 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398959 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb1a203-c5d9-4ba5-b31b-c6134963af46-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398968 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.398979 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa777a2-4dd0-407d-b615-34d7fcd0845b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.399010 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.417015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.421539 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.431152 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.435837 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f4d4fda9-31aa-46b8-983a-ffa32db2516c" (UID: "f4d4fda9-31aa-46b8-983a-ffa32db2516c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.443815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data" (OuterVolumeSpecName: "config-data") pod "04d1b1fd-682c-499c-8f5b-f22d4513217a" (UID: "04d1b1fd-682c-499c-8f5b-f22d4513217a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.458455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "4ec286aa-6594-4e36-b307-c8ffaa0e59de" (UID: "4ec286aa-6594-4e36-b307-c8ffaa0e59de"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.499918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.502276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") pod \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\" (UID: \"0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2\") " Nov 22 07:49:15 crc kubenswrapper[4858]: W1122 07:49:15.503269 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2/volumes/kubernetes.io~secret/metrics-certs-tls-certs Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.503395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504609 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504633 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504647 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d1b1fd-682c-499c-8f5b-f22d4513217a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504656 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504667 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504676 4858 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec286aa-6594-4e36-b307-c8ffaa0e59de-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.504685 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4d4fda9-31aa-46b8-983a-ffa32db2516c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.524226 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "4636a7e4-bda9-4b76-91ab-87ed6e121b50" (UID: "4636a7e4-bda9-4b76-91ab-87ed6e121b50"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.528656 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.533087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" (UID: "0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.558910 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.581356 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02115b03-d8fe-4334-96d6-cfbde07fd00a" path="/var/lib/kubelet/pods/02115b03-d8fe-4334-96d6-cfbde07fd00a/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.581962 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb292c6-b1bc-4c62-a3a5-753730fcd643" path="/var/lib/kubelet/pods/0eb292c6-b1bc-4c62-a3a5-753730fcd643/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.582549 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c72c65-31d4-4eed-bf9d-9358c14642ec" path="/var/lib/kubelet/pods/22c72c65-31d4-4eed-bf9d-9358c14642ec/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.583764 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" path="/var/lib/kubelet/pods/3d1176a9-f83c-4c6e-8436-60b9affe0857/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.584525 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" path="/var/lib/kubelet/pods/4d4e5cb5-ebc0-4cec-a53e-452efc26731b/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.585306 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632b42f8-37dd-4569-87f0-a7a6f9a802f0" path="/var/lib/kubelet/pods/632b42f8-37dd-4569-87f0-a7a6f9a802f0/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.597432 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5c6a5d-7db6-4083-97b4-6868dc190b66" path="/var/lib/kubelet/pods/7b5c6a5d-7db6-4083-97b4-6868dc190b66/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.598585 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8558e5a0-abd5-4634-82b1-dfd995b12ace" path="/var/lib/kubelet/pods/8558e5a0-abd5-4634-82b1-dfd995b12ace/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.599387 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" path="/var/lib/kubelet/pods/9023aa66-975c-44c6-8aba-cff06211fd31/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.600774 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964bc658-f627-428c-9dbd-dd640e9394bc" 
path="/var/lib/kubelet/pods/964bc658-f627-428c-9dbd-dd640e9394bc/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.601938 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" path="/var/lib/kubelet/pods/9906e22d-4a3b-4ab7-86b7-2944b6af0f34/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.602850 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10b7a00-765d-465e-b80e-e795da936e68" path="/var/lib/kubelet/pods/a10b7a00-765d-465e-b80e-e795da936e68/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.604210 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1dd36f7-e035-455e-92a8-9bf84fdb8829" path="/var/lib/kubelet/pods/a1dd36f7-e035-455e-92a8-9bf84fdb8829/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.606931 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.606965 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4636a7e4-bda9-4b76-91ab-87ed6e121b50-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.608626 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" path="/var/lib/kubelet/pods/a4127577-b995-4dfb-95d8-e089acc50fc9/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.609987 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa36d9bc-2f0d-44bf-97d2-cc8785002875" path="/var/lib/kubelet/pods/aa36d9bc-2f0d-44bf-97d2-cc8785002875/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.612047 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" path="/var/lib/kubelet/pods/af987998-e4fb-4798-aaf5-6cb5f6a4670e/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.613196 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e1b01b-da13-4121-8257-60e0fbbca27c" path="/var/lib/kubelet/pods/b5e1b01b-da13-4121-8257-60e0fbbca27c/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.613946 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27a55dc-71d3-468f-b503-8436883c2771" path="/var/lib/kubelet/pods/d27a55dc-71d3-468f-b503-8436883c2771/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.615410 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3afe9df-46ed-4387-a69d-ca42dc63b199" path="/var/lib/kubelet/pods/d3afe9df-46ed-4387-a69d-ca42dc63b199/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.618820 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" path="/var/lib/kubelet/pods/d464fcfc-b91d-45e8-8c90-18083a632351/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.619456 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" path="/var/lib/kubelet/pods/daa57087-ec21-4cff-aa47-68358e8f5039/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.620655 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f11e22df-3691-484e-a21d-906038a0eea8" path="/var/lib/kubelet/pods/f11e22df-3691-484e-a21d-906038a0eea8/volumes" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.622935 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.622986 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.623008 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement9450-account-delete-9jrdm"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.643075 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.651035 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder1521-account-delete-m9vdj"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.673176 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.686102 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell04e0e-account-delete-lp8zr"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.700637 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.713800 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutroneea0-account-delete-4d76b"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.764552 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystonecf57-account-delete-khlk2"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.774663 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystonecf57-account-delete-khlk2"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.798785 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.810354 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpsrz\" (UniqueName: \"kubernetes.io/projected/c4d53767-86e9-4e1c-930d-0d92af7e62e0-kube-api-access-hpsrz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.810416 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4d53767-86e9-4e1c-930d-0d92af7e62e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.812298 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:49:15 crc kubenswrapper[4858]: I1122 07:49:15.991540 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.023831 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rm92c"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.036955 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.060821 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5744c7f6cf-flhrq"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.079641 4858 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.089045 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-57cdc95956-lbjhn"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.097179 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.105626 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.112598 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.127094 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.135299 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.144453 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7b67c6cff8-nl4sb"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.318512 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.383989 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerID="4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab" exitCode=0 Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.384059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerDied","Data":"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab"} Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.384107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98","Type":"ContainerDied","Data":"a53746aebccb4cd57132e990003d213470f70263021d3e96deb4e1b50fc1dcb9"} Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.384138 4858 scope.go:117] "RemoveContainer" containerID="36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.384091 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.417945 4858 scope.go:117] "RemoveContainer" containerID="d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425784 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z86pf\" (UniqueName: \"kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425883 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425929 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.425960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.426013 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle\") pod \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\" (UID: \"8ccd3a3a-6077-4b71-a6ac-a9289bb59b98\") " Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.427946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.428095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.433945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts" (OuterVolumeSpecName: "scripts") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.447448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf" (OuterVolumeSpecName: "kube-api-access-z86pf") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "kube-api-access-z86pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.452448 4858 scope.go:117] "RemoveContainer" containerID="4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.468229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.501592 4858 scope.go:117] "RemoveContainer" containerID="b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.501827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.513607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.528232 4858 scope.go:117] "RemoveContainer" containerID="36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64" Nov 22 07:49:16 crc kubenswrapper[4858]: E1122 07:49:16.529220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64\": container with ID starting with 36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64 not found: ID does not exist" containerID="36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.529452 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64"} err="failed to get container status \"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64\": rpc error: code = NotFound desc = could not find container \"36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64\": container with ID starting with 36b809a07f2d1865f4b58be5ba0eacda9a990c069ae1bd51ca4260e00e5f2d64 not found: ID does not exist" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.529551 4858 scope.go:117] "RemoveContainer" containerID="d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.529589 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.529768 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z86pf\" (UniqueName: \"kubernetes.io/projected/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-kube-api-access-z86pf\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.529998 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530101 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530185 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530265 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530394 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: E1122 07:49:16.530813 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d\": container with ID starting with d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d not found: ID does not exist" containerID="d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530911 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d"} err="failed to get container status \"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d\": rpc error: code = NotFound desc = could not find container \"d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d\": container with ID starting with d006fa6935040b2221208df4ec654e93e473971526d24904dbf5b0bfa23bef8d not found: ID does not exist" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.530977 4858 scope.go:117] "RemoveContainer" containerID="4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab" Nov 22 07:49:16 crc kubenswrapper[4858]: E1122 07:49:16.531312 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab\": container with ID starting with 4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab not found: ID does not exist" containerID="4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.531480 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab"} err="failed to get container status \"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab\": rpc error: code = NotFound desc = could not find container \"4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab\": container with ID starting with 4af8a531d10dc507d00e8c77efbbf93a911cffeb7c733d448c770e8b688ad9ab not found: ID does not exist" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.531570 4858 scope.go:117] "RemoveContainer" containerID="b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663" Nov 22 07:49:16 crc kubenswrapper[4858]: E1122 07:49:16.532056 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663\": container with ID starting with b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663 not found: ID does not exist" containerID="b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.532128 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663"} err="failed to get container status \"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663\": rpc error: code = NotFound desc = could not find container \"b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663\": container with ID starting with b2979c0040f3ddebd4b51e2132acc0a2b2bc9289fd4ae198b0b5b2254ce16663 not found: ID does not exist" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.535451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data" 
(OuterVolumeSpecName: "config-data") pod "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" (UID: "8ccd3a3a-6077-4b71-a6ac-a9289bb59b98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.632739 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.748472 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.757571 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:16 crc kubenswrapper[4858]: I1122 07:49:16.852912 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56cfd7c4f7-gvswl" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.182:9696/\": dial tcp 10.217.0.182:9696: connect: connection refused" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.552737 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" path="/var/lib/kubelet/pods/04d1b1fd-682c-499c-8f5b-f22d4513217a/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.553801 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" path="/var/lib/kubelet/pods/0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.555260 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" path="/var/lib/kubelet/pods/2a92d321-46e4-4291-8ac3-fc8f039b3dcf/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.555857 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c63759-4028-4b22-acb3-c9c78f9cbfce" path="/var/lib/kubelet/pods/31c63759-4028-4b22-acb3-c9c78f9cbfce/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.556446 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" path="/var/lib/kubelet/pods/4636a7e4-bda9-4b76-91ab-87ed6e121b50/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.557161 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465c8e4d-cc9e-406b-8460-41e83f1dfadb" path="/var/lib/kubelet/pods/465c8e4d-cc9e-406b-8460-41e83f1dfadb/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.558525 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" path="/var/lib/kubelet/pods/4ec286aa-6594-4e36-b307-c8ffaa0e59de/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.559180 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b11c1e-be66-4546-bf19-b2a71c05256c" path="/var/lib/kubelet/pods/57b11c1e-be66-4546-bf19-b2a71c05256c/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.560310 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" path="/var/lib/kubelet/pods/8ccd3a3a-6077-4b71-a6ac-a9289bb59b98/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.561039 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c4d53767-86e9-4e1c-930d-0d92af7e62e0" path="/var/lib/kubelet/pods/c4d53767-86e9-4e1c-930d-0d92af7e62e0/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.561541 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8be274c-bb8a-43d2-8a56-dacb6789d343" path="/var/lib/kubelet/pods/d8be274c-bb8a-43d2-8a56-dacb6789d343/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.562795 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" path="/var/lib/kubelet/pods/ddb1a203-c5d9-4ba5-b31b-c6134963af46/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.563512 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" path="/var/lib/kubelet/pods/eaa777a2-4dd0-407d-b615-34d7fcd0845b/volumes" Nov 22 07:49:17 crc kubenswrapper[4858]: I1122 07:49:17.564184 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" path="/var/lib/kubelet/pods/f4d4fda9-31aa-46b8-983a-ffa32db2516c/volumes" Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.386365 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.386985 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.387361 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.387392 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.389398 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.391089 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc 
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.394280 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:18 crc kubenswrapper[4858]: E1122 07:49:18.394347 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.224826 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.313873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8544d\" (UniqueName: \"kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.313961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.314012 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.314069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.314096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.314115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.315213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle\") pod \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\" (UID: \"555cf9f2-a18e-4b84-b360-d03c7e0d0821\") " Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.321534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.322259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d" (OuterVolumeSpecName: "kube-api-access-8544d") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "kube-api-access-8544d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.362306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.363420 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config" (OuterVolumeSpecName: "config") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.368657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.378178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.382039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "555cf9f2-a18e-4b84-b360-d03c7e0d0821" (UID: "555cf9f2-a18e-4b84-b360-d03c7e0d0821"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417457 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417516 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8544d\" (UniqueName: \"kubernetes.io/projected/555cf9f2-a18e-4b84-b360-d03c7e0d0821-kube-api-access-8544d\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417531 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417539 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417548 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417556 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.417565 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/555cf9f2-a18e-4b84-b360-d03c7e0d0821-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.437284 4858 generic.go:334] "Generic (PLEG): container finished" podID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerID="721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a" exitCode=0 Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.437393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerDied","Data":"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a"} Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.437435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56cfd7c4f7-gvswl" event={"ID":"555cf9f2-a18e-4b84-b360-d03c7e0d0821","Type":"ContainerDied","Data":"3cf09d0702349f2485c683260afed37dc42a8928e4e5cd19678ca8afa92abd57"} Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.437684 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56cfd7c4f7-gvswl" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.437464 4858 scope.go:117] "RemoveContainer" containerID="74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.542391 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.545106 4858 scope.go:117] "RemoveContainer" containerID="721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.556860 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56cfd7c4f7-gvswl"] Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.630581 4858 scope.go:117] "RemoveContainer" containerID="74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55" Nov 22 07:49:20 crc kubenswrapper[4858]: E1122 07:49:20.633830 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55\": container with ID starting with 74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55 not found: ID does not exist" containerID="74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.633886 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55"} err="failed to get container status \"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55\": rpc error: code = NotFound desc = could not find container \"74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55\": container with ID starting with 74d84fa8451a3d4920454e26b0771eec7c0dc0ac1f7ab36918052f428e499d55 not found: ID does not exist" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.633921 4858 scope.go:117] "RemoveContainer" containerID="721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a" Nov 22 07:49:20 crc kubenswrapper[4858]: E1122 07:49:20.637890 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a\": container with ID starting with 721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a not found: ID does not exist" containerID="721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a" Nov 22 07:49:20 crc kubenswrapper[4858]: I1122 07:49:20.637946 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a"} err="failed to get container status \"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a\": rpc error: code = NotFound desc = could not find container \"721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a\": container with ID starting with 721cbf5e0114e6934d01c92b59ee9c79292efe5c7aec22fadedf8f916636219a not found: ID does not exist" Nov 22 07:49:21 crc kubenswrapper[4858]: I1122 07:49:21.563773 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" path="/var/lib/kubelet/pods/555cf9f2-a18e-4b84-b360-d03c7e0d0821/volumes" Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.386606 4858 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.387403 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.387870 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.387991 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.389582 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.391700 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.393073 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:23 crc kubenswrapper[4858]: E1122 07:49:23.393125 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.012118 4858 scope.go:117] "RemoveContainer" 
containerID="0be496c05b6ca9bbc0552d43b838acc7ab82ea2f2a395f854baaaaee0619ac0a" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.063676 4858 scope.go:117] "RemoveContainer" containerID="6cfc782bc9520723da7c7f7601da4f5f0ce94cfc24b0de5b5732d60079098d09" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.088165 4858 scope.go:117] "RemoveContainer" containerID="01bded6dc21a4fd246c2c6f00a02bab06b43ba88276bd0abc3233f17785ed65c" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.125563 4858 scope.go:117] "RemoveContainer" containerID="52223809a6d6bfb7225e42121de5c27970a68606da724fbdc5f05682783c72f0" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.157721 4858 scope.go:117] "RemoveContainer" containerID="e861834850f977b602f18ed8d17b254529dd837b73735c0bbac78e6b2b23be6f" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.192301 4858 scope.go:117] "RemoveContainer" containerID="9a14ba256974b4e536774cfc054ad26464c059adcc22a7bb717825b06118eb03" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.221408 4858 scope.go:117] "RemoveContainer" containerID="fb36d8654ae3d6515feade85100caacf66c69e3e46dec05899254d5dc28b6d08" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.249663 4858 scope.go:117] "RemoveContainer" containerID="87dc9b2e06bc62a486c9c4668b5e0075930637436dc360e930cf4a1288e9f350" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.272357 4858 scope.go:117] "RemoveContainer" containerID="92b0cb42168f7f97d3cfb66cdb73d033c460b257408d844abc6d96bfd9bb9a4d" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.295363 4858 scope.go:117] "RemoveContainer" containerID="f20acacb794a33f3c4580766d27a38e6353236383e5589415e8e4d4c9d95c565" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.328736 4858 scope.go:117] "RemoveContainer" containerID="a1cf1344d8fa4a530a9c19077eaef6e03fd43ff1247eeb398c4df12950f2881c" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.354490 4858 scope.go:117] "RemoveContainer" containerID="4268d28e0d7669552cd784717affef98db42d1f02dd3a45710ff4af9661f0dec" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.384384 4858 scope.go:117] "RemoveContainer" containerID="67f96849e31d122e4179b6efb15731fb368f96111e60368078186e3ff4dfdd2c" Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.386178 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.387058 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.387549 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" 
containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.387592 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.388015 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.390155 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.392259 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.392374 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.421585 4858 scope.go:117] "RemoveContainer" containerID="4b2278b5a2b63a8809b3b18c14d3d73fbbf028ec81bae4f82dec2b606ada88b7" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.450232 4858 scope.go:117] "RemoveContainer" containerID="251ca398e74de7732f2cc51f902e5158e1046e03a38fc8c768ee48563f9f231a" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.480616 4858 scope.go:117] "RemoveContainer" containerID="cf8b537af4b8c32c28f7db79176ae05e3ffafea339b9f896e59649f11eba428c" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.509118 4858 scope.go:117] "RemoveContainer" containerID="fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.538176 4858 scope.go:117] "RemoveContainer" containerID="7424937b63e055893b5aae4bd3bd82c0b7a1388a0f97c8f17d97e275fc381ff3" Nov 22 07:49:28 crc kubenswrapper[4858]: E1122 07:49:28.541638 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff\": container with ID starting with fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff not found: ID does not exist" containerID="fb4079899e7326258ece2c125efa6457958d78ce1d433ec3f49412a06aa752ff" Nov 22 07:49:28 
crc kubenswrapper[4858]: I1122 07:49:28.594999 4858 scope.go:117] "RemoveContainer" containerID="81249119f01504e2f75136ccaa20d76ad79562ce6c4c032f420d15e3ac22cfbc" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.614412 4858 scope.go:117] "RemoveContainer" containerID="aa40f6ffa3b5047db31ed930a0581a3ab393038f8637f6aa84f0906dfaa6ab25" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.651606 4858 scope.go:117] "RemoveContainer" containerID="e45ad93bdcfb7d3ad64ecf2597dcbec1ee1f0bb2ff160f3cd3b89c57ee80f12d" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.676862 4858 scope.go:117] "RemoveContainer" containerID="39fde520f058b73ce73c8fd11a8bfa24e055a38211b694c36194ba6867caba1b" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.702133 4858 scope.go:117] "RemoveContainer" containerID="749e7f842b25d66763df85de32ae258db50242c10ba859d9ba25bc43fedc493f" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.724649 4858 scope.go:117] "RemoveContainer" containerID="6f22965b9d245713fae3ab6b040b415aa1ede7b9a460b7408dad4321ecc55b82" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.745836 4858 scope.go:117] "RemoveContainer" containerID="f52562da73839518f25e57d06af939791fd8a1949a98847efb6f708599667a5d" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.772057 4858 scope.go:117] "RemoveContainer" containerID="8762094dad89b147d20563b6fe92d61a77318c8599c070e5c78908fdb39ce0f7" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.816278 4858 scope.go:117] "RemoveContainer" containerID="887a79386e9217424aa800cb35eef23c59e3cf8a8bb1f2591a6932ebabb407b5" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.837276 4858 scope.go:117] "RemoveContainer" containerID="090810ab09f2017d5bdda8d8d4d62aab4310147262017ccc624f46df94502923" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.872881 4858 scope.go:117] "RemoveContainer" containerID="541f8565f916d4fed150459498ecc51ccc608d1c9bc0ed12d6ab3ee39555c0bc" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.899172 4858 scope.go:117] "RemoveContainer" containerID="a35f97adc654f8d53512934ced68b20cadeb39ebe2016eef17d8e1859247bf90" Nov 22 07:49:28 crc kubenswrapper[4858]: I1122 07:49:28.923157 4858 scope.go:117] "RemoveContainer" containerID="6325a643277214e6820d9d23f8b64430ed2c31d44677509064e322f3ad0b9c22" Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.387115 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.387927 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.388259 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is 
running failed: container process not found" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.388345 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.389189 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.391283 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.393041 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:49:33 crc kubenswrapper[4858]: E1122 07:49:33.393114 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xbvdl" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.323637 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xbvdl_9794c036-86f4-4fb8-8f69-0918cbbf9bc6/ovs-vswitchd/0.log" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.325412 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.348945 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4955v\" (UniqueName: \"kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349047 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib" (OuterVolumeSpecName: "var-lib") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts\") pod \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\" (UID: \"9794c036-86f4-4fb8-8f69-0918cbbf9bc6\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.349923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run" (OuterVolumeSpecName: "var-run") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.352166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log" (OuterVolumeSpecName: "var-log") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.354070 4858 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.354112 4858 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-log\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.354126 4858 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.354138 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.357972 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts" (OuterVolumeSpecName: "scripts") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.358679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v" (OuterVolumeSpecName: "kube-api-access-4955v") pod "9794c036-86f4-4fb8-8f69-0918cbbf9bc6" (UID: "9794c036-86f4-4fb8-8f69-0918cbbf9bc6"). InnerVolumeSpecName "kube-api-access-4955v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.420471 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.455086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") pod \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.455305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn88k\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k\") pod \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.455419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.455460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock\") pod \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.455570 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache\") pod \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\" (UID: \"df9f2ec4-f57a-47a7-94a2-17549e2ed641\") " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.456068 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.456096 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4955v\" (UniqueName: \"kubernetes.io/projected/9794c036-86f4-4fb8-8f69-0918cbbf9bc6-kube-api-access-4955v\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.456785 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache" (OuterVolumeSpecName: "cache") pod "df9f2ec4-f57a-47a7-94a2-17549e2ed641" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.459162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock" (OuterVolumeSpecName: "lock") pod "df9f2ec4-f57a-47a7-94a2-17549e2ed641" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.461263 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "df9f2ec4-f57a-47a7-94a2-17549e2ed641" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.461397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "df9f2ec4-f57a-47a7-94a2-17549e2ed641" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.462311 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k" (OuterVolumeSpecName: "kube-api-access-hn88k") pod "df9f2ec4-f57a-47a7-94a2-17549e2ed641" (UID: "df9f2ec4-f57a-47a7-94a2-17549e2ed641"). InnerVolumeSpecName "kube-api-access-hn88k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.557420 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.557480 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn88k\" (UniqueName: \"kubernetes.io/projected/df9f2ec4-f57a-47a7-94a2-17549e2ed641-kube-api-access-hn88k\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.557529 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.557544 4858 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-lock\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.557554 4858 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df9f2ec4-f57a-47a7-94a2-17549e2ed641-cache\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.579588 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.635770 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xbvdl_9794c036-86f4-4fb8-8f69-0918cbbf9bc6/ovs-vswitchd/0.log" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.637247 4858 generic.go:334] "Generic (PLEG): container finished" podID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" exitCode=137 Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.637340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerDied","Data":"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105"} Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.637383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xbvdl" 
event={"ID":"9794c036-86f4-4fb8-8f69-0918cbbf9bc6","Type":"ContainerDied","Data":"60b203ac6d91e2230b67544e2eb62f4dbef70088c9d081f82150040a4d797776"} Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.637404 4858 scope.go:117] "RemoveContainer" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.637542 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xbvdl" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.657431 4858 generic.go:334] "Generic (PLEG): container finished" podID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerID="6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab" exitCode=137 Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.657539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab"} Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.657691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df9f2ec4-f57a-47a7-94a2-17549e2ed641","Type":"ContainerDied","Data":"ae7edd80f218a450bd8bb2175eabf9ca34cccf65815ee7663b10d1e1e7b63945"} Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.659549 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.659749 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.679364 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.679686 4858 scope.go:117] "RemoveContainer" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.688918 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-xbvdl"] Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.706043 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.713312 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.713342 4858 scope.go:117] "RemoveContainer" containerID="feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.747425 4858 scope.go:117] "RemoveContainer" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" Nov 22 07:49:35 crc kubenswrapper[4858]: E1122 07:49:35.748033 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105\": container with ID starting with 6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105 not found: ID does not exist" containerID="6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.748080 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105"} err="failed to get container status \"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105\": rpc error: code = NotFound desc = could not find container \"6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105\": container with ID starting with 6aabcbb7c099484ed29c885ef7b8ed18f4e10f2a95194780699e7962dc606105 not found: ID does not exist" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.748113 4858 scope.go:117] "RemoveContainer" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" Nov 22 07:49:35 crc kubenswrapper[4858]: E1122 07:49:35.748414 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581\": container with ID starting with ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 not found: ID does not exist" containerID="ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.748447 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581"} err="failed to get container status \"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581\": rpc error: code = NotFound desc = could not find container \"ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581\": container with ID starting with ffd50536d3af8f82bc1b57046c774e0c2441d6fa8f703108cad71b01c577a581 not found: ID does not exist" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.748474 4858 scope.go:117] "RemoveContainer" containerID="feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334" Nov 22 07:49:35 crc kubenswrapper[4858]: E1122 07:49:35.749174 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334\": container with ID starting with feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334 not found: ID does not exist" containerID="feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.749235 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334"} err="failed to get container status \"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334\": rpc error: code = NotFound desc = could not find container \"feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334\": container with ID starting with feb9b62533d4a406cb84ad13ef6b605f450c4ff9824f420d49f8a122e0609334 not found: ID does not exist" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.749256 4858 scope.go:117] "RemoveContainer" containerID="6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.775954 4858 scope.go:117] "RemoveContainer" containerID="db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.799987 4858 scope.go:117] "RemoveContainer" containerID="ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.820868 4858 
scope.go:117] "RemoveContainer" containerID="6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.842136 4858 scope.go:117] "RemoveContainer" containerID="d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.863975 4858 scope.go:117] "RemoveContainer" containerID="4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.887991 4858 scope.go:117] "RemoveContainer" containerID="51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.917221 4858 scope.go:117] "RemoveContainer" containerID="93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.940368 4858 scope.go:117] "RemoveContainer" containerID="a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.963004 4858 scope.go:117] "RemoveContainer" containerID="75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.970182 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8d445612-f1b5-47d6-b247-398725d6fe54"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8d445612-f1b5-47d6-b247-398725d6fe54] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8d445612_f1b5_47d6_b247_398725d6fe54.slice" Nov 22 07:49:35 crc kubenswrapper[4858]: E1122 07:49:35.970287 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8d445612-f1b5-47d6-b247-398725d6fe54] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8d445612-f1b5-47d6-b247-398725d6fe54] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8d445612_f1b5_47d6_b247_398725d6fe54.slice" pod="openstack/ovsdbserver-sb-0" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" Nov 22 07:49:35 crc kubenswrapper[4858]: I1122 07:49:35.984462 4858 scope.go:117] "RemoveContainer" containerID="5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.009861 4858 scope.go:117] "RemoveContainer" containerID="c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.033624 4858 scope.go:117] "RemoveContainer" containerID="dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.062441 4858 scope.go:117] "RemoveContainer" containerID="73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.083815 4858 scope.go:117] "RemoveContainer" containerID="49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.104479 4858 scope.go:117] "RemoveContainer" containerID="6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.105293 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab\": container with ID starting with 
6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab not found: ID does not exist" containerID="6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.105513 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab"} err="failed to get container status \"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab\": rpc error: code = NotFound desc = could not find container \"6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab\": container with ID starting with 6161606c252fbf40a2d05f2c30031dccd34d8590fecc8cd5db7552aa761cddab not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.105613 4858 scope.go:117] "RemoveContainer" containerID="db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.106512 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307\": container with ID starting with db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307 not found: ID does not exist" containerID="db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.106606 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307"} err="failed to get container status \"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307\": rpc error: code = NotFound desc = could not find container \"db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307\": container with ID starting with db120838de20d580e33c33f226aa187a6eb1e751fd6f7fad3e5a48f5dd261307 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.106681 4858 scope.go:117] "RemoveContainer" containerID="ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.107180 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66\": container with ID starting with ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66 not found: ID does not exist" containerID="ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.107233 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66"} err="failed to get container status \"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66\": rpc error: code = NotFound desc = could not find container \"ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66\": container with ID starting with ddbbd0d02692f1708f9c689f92d92284581c3e63d9eea7caaeb5cd94619baf66 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.107249 4858 scope.go:117] "RemoveContainer" containerID="6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.107815 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3\": container with ID starting with 6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3 not found: ID does not exist" containerID="6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.107894 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3"} err="failed to get container status \"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3\": rpc error: code = NotFound desc = could not find container \"6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3\": container with ID starting with 6b6fa3f659ab800ce1f43de71a841c49098811f31035817d1f94f9d50eb268c3 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.107952 4858 scope.go:117] "RemoveContainer" containerID="d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.108501 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098\": container with ID starting with d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098 not found: ID does not exist" containerID="d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.108563 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098"} err="failed to get container status \"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098\": rpc error: code = NotFound desc = could not find container \"d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098\": container with ID starting with d723cced4851932a3fddd97643c57d994089788dcb09ac6ffe74ffdd4349c098 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.108593 4858 scope.go:117] "RemoveContainer" containerID="4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.108867 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600\": container with ID starting with 4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600 not found: ID does not exist" containerID="4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.108901 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600"} err="failed to get container status \"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600\": rpc error: code = NotFound desc = could not find container \"4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600\": container with ID starting with 4eb596d2019c1356dcd75d700c8ddfe54d01964dc320bba59d0f32259642f600 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.108922 4858 scope.go:117] "RemoveContainer" 
containerID="51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.110399 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52\": container with ID starting with 51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52 not found: ID does not exist" containerID="51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.110444 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52"} err="failed to get container status \"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52\": rpc error: code = NotFound desc = could not find container \"51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52\": container with ID starting with 51e033ffc62a8662c8c40134fa2c41b6c00b4696fa3f683f2e3b273cea09fb52 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.110476 4858 scope.go:117] "RemoveContainer" containerID="93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.111303 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4\": container with ID starting with 93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4 not found: ID does not exist" containerID="93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.111437 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4"} err="failed to get container status \"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4\": rpc error: code = NotFound desc = could not find container \"93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4\": container with ID starting with 93114050774f3340f4c319ab89da89802851a60b64d06b9b02e5467e84e492f4 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.111481 4858 scope.go:117] "RemoveContainer" containerID="a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.112106 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97\": container with ID starting with a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97 not found: ID does not exist" containerID="a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.112163 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97"} err="failed to get container status \"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97\": rpc error: code = NotFound desc = could not find container \"a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97\": container with ID starting with 
a37d5be1404aba66fd88fbcfac15013212578f2394d26c20fbae7b5dd01dca97 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.112197 4858 scope.go:117] "RemoveContainer" containerID="75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.112803 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7\": container with ID starting with 75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7 not found: ID does not exist" containerID="75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.112842 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7"} err="failed to get container status \"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7\": rpc error: code = NotFound desc = could not find container \"75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7\": container with ID starting with 75758c05df485631e7165e3eda3cd862bd9f45df27ea7228da7e8f9df66f71e7 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.112865 4858 scope.go:117] "RemoveContainer" containerID="5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.113162 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f\": container with ID starting with 5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f not found: ID does not exist" containerID="5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.113203 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f"} err="failed to get container status \"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f\": rpc error: code = NotFound desc = could not find container \"5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f\": container with ID starting with 5bf28e72f5c1ff90df03dec7ddaf1f0181d4b29ff0cf6a6d1ac758bdeda6db6f not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.113225 4858 scope.go:117] "RemoveContainer" containerID="c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.113724 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c\": container with ID starting with c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c not found: ID does not exist" containerID="c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.113802 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c"} err="failed to get container status \"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c\": rpc 
error: code = NotFound desc = could not find container \"c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c\": container with ID starting with c4fef287129f49f2769c68c5c457b4e5a1a4db566ef400571fcc2fdc8462132c not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.113862 4858 scope.go:117] "RemoveContainer" containerID="dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.114599 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70\": container with ID starting with dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70 not found: ID does not exist" containerID="dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.114640 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70"} err="failed to get container status \"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70\": rpc error: code = NotFound desc = could not find container \"dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70\": container with ID starting with dfa9523a4b078544932d901680225d4de5fa2013780073f14ef223d94e84cb70 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.114662 4858 scope.go:117] "RemoveContainer" containerID="73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.114966 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57\": container with ID starting with 73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57 not found: ID does not exist" containerID="73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.114994 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57"} err="failed to get container status \"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57\": rpc error: code = NotFound desc = could not find container \"73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57\": container with ID starting with 73019366f6fdd43ff05a42e5de8a41d9221c966d6f413a3f7734d4ab7fd50c57 not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.115013 4858 scope.go:117] "RemoveContainer" containerID="49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b" Nov 22 07:49:36 crc kubenswrapper[4858]: E1122 07:49:36.115368 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b\": container with ID starting with 49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b not found: ID does not exist" containerID="49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.115401 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b"} err="failed to get container status \"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b\": rpc error: code = NotFound desc = could not find container \"49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b\": container with ID starting with 49a84619fa94504b5acf3cedb15be1f64add37c903ca660ba9af56d55ff8e19b not found: ID does not exist" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.670656 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.726382 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:49:36 crc kubenswrapper[4858]: I1122 07:49:36.739033 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:49:37 crc kubenswrapper[4858]: I1122 07:49:37.550640 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d445612-f1b5-47d6-b247-398725d6fe54" path="/var/lib/kubelet/pods/8d445612-f1b5-47d6-b247-398725d6fe54/volumes" Nov 22 07:49:37 crc kubenswrapper[4858]: I1122 07:49:37.551995 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" path="/var/lib/kubelet/pods/9794c036-86f4-4fb8-8f69-0918cbbf9bc6/volumes" Nov 22 07:49:37 crc kubenswrapper[4858]: I1122 07:49:37.552988 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" path="/var/lib/kubelet/pods/df9f2ec4-f57a-47a7-94a2-17549e2ed641/volumes" Nov 22 07:49:42 crc kubenswrapper[4858]: I1122 07:49:42.126954 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd92662c9-980a-41b0-ad01-bbb1cdaf864b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd92662c9-980a-41b0-ad01-bbb1cdaf864b] : Timed out while waiting for systemd to remove kubepods-besteffort-podd92662c9_980a_41b0_ad01_bbb1cdaf864b.slice" Nov 22 07:49:42 crc kubenswrapper[4858]: E1122 07:49:42.127912 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podd92662c9-980a-41b0-ad01-bbb1cdaf864b] : unable to destroy cgroup paths for cgroup [kubepods besteffort podd92662c9-980a-41b0-ad01-bbb1cdaf864b] : Timed out while waiting for systemd to remove kubepods-besteffort-podd92662c9_980a_41b0_ad01_bbb1cdaf864b.slice" pod="openstack/openstack-cell1-galera-0" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" Nov 22 07:49:42 crc kubenswrapper[4858]: I1122 07:49:42.724356 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:49:42 crc kubenswrapper[4858]: I1122 07:49:42.770594 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:49:42 crc kubenswrapper[4858]: I1122 07:49:42.777485 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:49:43 crc kubenswrapper[4858]: I1122 07:49:43.545555 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92662c9-980a-41b0-ad01-bbb1cdaf864b" path="/var/lib/kubelet/pods/d92662c9-980a-41b0-ad01-bbb1cdaf864b/volumes" Nov 22 07:49:45 crc kubenswrapper[4858]: I1122 07:49:45.312192 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:49:45 crc kubenswrapper[4858]: I1122 07:49:45.312603 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:15 crc kubenswrapper[4858]: I1122 07:50:15.312733 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:50:15 crc kubenswrapper[4858]: I1122 07:50:15.313808 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:15 crc kubenswrapper[4858]: I1122 07:50:15.313881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:50:15 crc kubenswrapper[4858]: I1122 07:50:15.314793 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:50:15 crc kubenswrapper[4858]: I1122 07:50:15.314856 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" gracePeriod=600 Nov 22 07:50:15 crc kubenswrapper[4858]: E1122 07:50:15.968836 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:50:16 crc kubenswrapper[4858]: I1122 07:50:16.053981 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" exitCode=0 Nov 22 07:50:16 crc kubenswrapper[4858]: I1122 07:50:16.054486 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a"} Nov 22 07:50:16 crc kubenswrapper[4858]: I1122 07:50:16.054538 4858 scope.go:117] "RemoveContainer" containerID="f8caeb1a403d03d8804bfa487bf29539e11f1f2a11d9543c3192f5b713edaba0" Nov 22 07:50:16 crc kubenswrapper[4858]: I1122 07:50:16.055189 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:50:16 crc kubenswrapper[4858]: E1122 07:50:16.055497 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.672010 4858 scope.go:117] "RemoveContainer" containerID="ed31a8de8ebda973678facde6b66275df75c33a364706e494d6a7d07aab991ea" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.710902 4858 scope.go:117] "RemoveContainer" containerID="3f5ad003ed82a4b8e9cedea83c84f2a30c9a4de0fec0a69fc9fdc9a61424e182" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.753367 4858 scope.go:117] "RemoveContainer" containerID="6bf2d7b9ad4531e14c9327a6a63588e930346a2e2dcae212eff919b9b5b4719c" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.778688 4858 scope.go:117] "RemoveContainer" containerID="0fc2e8610b309ec2b9325b8a5fb9a64e0de3f594df62b7a0fe26ced79e91e89c" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.813476 4858 scope.go:117] "RemoveContainer" containerID="7353a4588a60ee3d4c43c007a2286febfa005d0de82d84253ef99191853f4d20" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.918223 4858 scope.go:117] "RemoveContainer" containerID="06711e654f6c8f43dfb70d0e3d0cf613ddc8ac0aa5d4281e2d0aea5c99c77349" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.939163 4858 scope.go:117] "RemoveContainer" containerID="111dcca46bd3fcaad0968661bca007da7afb43901445452bb8c21debc1e1efb9" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.972012 4858 scope.go:117] "RemoveContainer" containerID="1dad77e690ec2f712aff447348a272321482ef5f3173abeb7fe25907d4dc4a72" Nov 22 07:50:29 crc kubenswrapper[4858]: I1122 07:50:29.997427 4858 scope.go:117] "RemoveContainer" containerID="7d85dd2bf391a295963c1c04a60ba1230b2aacca17a1680433770b7be5c7e8c8" Nov 22 07:50:30 crc kubenswrapper[4858]: I1122 07:50:30.034256 4858 scope.go:117] "RemoveContainer" containerID="df637c4bab3b1c089c9ad8726c02b0cd45f173fc27bc1d9048018902900124ab" Nov 22 07:50:30 crc kubenswrapper[4858]: I1122 07:50:30.062561 4858 scope.go:117] "RemoveContainer" containerID="e3acbe684a3b1cf56d9ce339047e865b4bf5f7e2b06b06679ba47e5ef77b37e7" Nov 22 07:50:30 crc 
kubenswrapper[4858]: I1122 07:50:30.088205 4858 scope.go:117] "RemoveContainer" containerID="459ed18256c6e74e65f42b2044fae1a1c6a3d48927d45cffc496a022915a3956" Nov 22 07:50:30 crc kubenswrapper[4858]: I1122 07:50:30.119893 4858 scope.go:117] "RemoveContainer" containerID="f6520daed76b5e870bfc8aa2ee1122860ae7b6539407e7359bd9ae7e3a45b1f7" Nov 22 07:50:30 crc kubenswrapper[4858]: I1122 07:50:30.163704 4858 scope.go:117] "RemoveContainer" containerID="3da885cb1a497446e4704b17b4b8aaf873885fce07483c60700f3f890b5ad6e2" Nov 22 07:50:30 crc kubenswrapper[4858]: I1122 07:50:30.538586 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:50:30 crc kubenswrapper[4858]: E1122 07:50:30.538969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.908129 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909123 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909146 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909172 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909180 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-reaper" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909195 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-reaper" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909205 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02115b03-d8fe-4334-96d6-cfbde07fd00a" containerName="kube-state-metrics" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909212 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="02115b03-d8fe-4334-96d6-cfbde07fd00a" containerName="kube-state-metrics" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909225 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964bc658-f627-428c-9dbd-dd640e9394bc" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909231 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="964bc658-f627-428c-9dbd-dd640e9394bc" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909240 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27a55dc-71d3-468f-b503-8436883c2771" 
containerName="barbican-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909245 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909254 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909260 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909271 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909278 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909289 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c63759-4028-4b22-acb3-c9c78f9cbfce" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909295 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c63759-4028-4b22-acb3-c9c78f9cbfce" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909302 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b11c1e-be66-4546-bf19-b2a71c05256c" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909309 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b11c1e-be66-4546-bf19-b2a71c05256c" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909337 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909343 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909354 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="rsync" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="rsync" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909372 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa36d9bc-2f0d-44bf-97d2-cc8785002875" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909379 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa36d9bc-2f0d-44bf-97d2-cc8785002875" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909388 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909394 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909402 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909407 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909417 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-central-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909423 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-central-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909433 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="openstack-network-exporter" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909439 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="openstack-network-exporter" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909448 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909454 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909462 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909481 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04a3a5c-6169-4e97-a167-1c168a8d1690" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909488 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04a3a5c-6169-4e97-a167-1c168a8d1690" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909496 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="setup-container" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909501 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="setup-container" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909509 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909515 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909522 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909527 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" Nov 22 07:50:42 crc 
kubenswrapper[4858]: E1122 07:50:42.909534 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909540 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-server" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909549 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909554 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-expirer" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909567 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-expirer" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909575 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="proxy-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909580 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="proxy-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909591 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909596 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909603 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerName="memcached" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909609 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerName="memcached" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909617 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909622 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-server" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909631 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909636 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909644 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909650 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 
07:50:42.909658 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909664 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909675 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909680 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909689 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909701 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909706 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909714 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909719 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909728 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909734 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909744 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909750 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909766 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909771 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465c8e4d-cc9e-406b-8460-41e83f1dfadb" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909776 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="465c8e4d-cc9e-406b-8460-41e83f1dfadb" containerName="mariadb-account-delete" Nov 
22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909786 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="mysql-bootstrap" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909792 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="mysql-bootstrap" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909798 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909804 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909813 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="setup-container" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909819 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="setup-container" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909829 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909834 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909846 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909851 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909860 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server-init" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909866 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server-init" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909872 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909879 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-server" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909886 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="swift-recon-cron" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909892 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="swift-recon-cron" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909901 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8be274c-bb8a-43d2-8a56-dacb6789d343" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909907 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8be274c-bb8a-43d2-8a56-dacb6789d343" 
containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909916 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909922 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909949 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909964 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" containerName="nova-cell1-conductor-conductor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909972 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" containerName="nova-cell1-conductor-conductor" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.909983 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" containerName="keystone-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.909991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" containerName="keystone-api" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910002 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-notification-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-notification-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910019 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="sg-core" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910026 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="sg-core" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910038 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910045 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-log" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910057 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="cinder-scheduler" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910064 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="cinder-scheduler" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910072 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="probe" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910079 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="probe" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910091 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="galera" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910099 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="galera" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910111 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910118 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910126 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910132 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910142 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910147 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" Nov 22 07:50:42 crc kubenswrapper[4858]: E1122 07:50:42.910155 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910161 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910312 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="ovn-northd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910367 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910382 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910392 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="swift-recon-cron" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910399 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910407 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="964bc658-f627-428c-9dbd-dd640e9394bc" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910426 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910439 4858 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="af987998-e4fb-4798-aaf5-6cb5f6a4670e" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910450 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910457 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="proxy-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910468 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec286aa-6594-4e36-b307-c8ffaa0e59de" containerName="galera" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910475 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910485 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910493 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa36d9bc-2f0d-44bf-97d2-cc8785002875" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910500 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910509 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c63759-4028-4b22-acb3-c9c78f9cbfce" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910516 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910525 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-central-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910536 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="probe" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910545 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="sg-core" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910553 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910564 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10b7a00-765d-465e-b80e-e795da936e68" containerName="cinder-scheduler" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910589 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="rsync" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910599 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910609 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d1b1fd-682c-499c-8f5b-f22d4513217a" containerName="barbican-keystone-listener-log" 
Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910620 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-reaper" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910631 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccd3a3a-6077-4b71-a6ac-a9289bb59b98" containerName="ceilometer-notification-agent" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910637 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910646 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8be274c-bb8a-43d2-8a56-dacb6789d343" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910656 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-expirer" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910669 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="02115b03-d8fe-4334-96d6-cfbde07fd00a" containerName="kube-state-metrics" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910679 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910689 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-metadata" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910700 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa777a2-4dd0-407d-b615-34d7fcd0845b" containerName="barbican-worker" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910708 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="465c8e4d-cc9e-406b-8460-41e83f1dfadb" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910717 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-httpd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910725 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d4fda9-31aa-46b8-983a-ffa32db2516c" containerName="keystone-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910735 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4e5cb5-ebc0-4cec-a53e-452efc26731b" containerName="placement-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910744 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovs-vswitchd" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910753 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910762 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b11c1e-be66-4546-bf19-b2a71c05256c" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910774 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9794c036-86f4-4fb8-8f69-0918cbbf9bc6" containerName="ovsdb-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910782 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="daa57087-ec21-4cff-aa47-68358e8f5039" containerName="cinder-api-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910789 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4127577-b995-4dfb-95d8-e089acc50fc9" containerName="glance-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910800 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9906e22d-4a3b-4ab7-86b7-2944b6af0f34" containerName="memcached" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910811 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4636a7e4-bda9-4b76-91ab-87ed6e121b50" containerName="ovn-controller" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910828 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="container-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910839 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27a55dc-71d3-468f-b503-8436883c2771" containerName="barbican-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910849 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9023aa66-975c-44c6-8aba-cff06211fd31" containerName="nova-metadata-log" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910859 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1176a9-f83c-4c6e-8436-60b9affe0857" containerName="nova-api-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910871 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb1a203-c5d9-4ba5-b31b-c6134963af46" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910882 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-updater" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910893 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a92d321-46e4-4291-8ac3-fc8f039b3dcf" containerName="rabbitmq" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-server" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910912 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="account-replicator" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910924 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb2d6c0-fce8-4356-a3c5-5b1cd6c23bb2" containerName="openstack-network-exporter" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910933 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04a3a5c-6169-4e97-a167-1c168a8d1690" containerName="mariadb-account-delete" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910940 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555cf9f2-a18e-4b84-b360-d03c7e0d0821" containerName="neutron-api" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910952 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9f2ec4-f57a-47a7-94a2-17549e2ed641" containerName="object-auditor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.910963 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d464fcfc-b91d-45e8-8c90-18083a632351" 
containerName="nova-cell1-conductor-conductor" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.913573 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:42 crc kubenswrapper[4858]: I1122 07:50:42.920552 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.049955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.050086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.050128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2rw\" (UniqueName: \"kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.152000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.152089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds2rw\" (UniqueName: \"kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.152157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.152766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.153000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " 
pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.177752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds2rw\" (UniqueName: \"kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw\") pod \"redhat-marketplace-k6r5p\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.243829 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.536507 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:50:43 crc kubenswrapper[4858]: E1122 07:50:43.537081 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:50:43 crc kubenswrapper[4858]: I1122 07:50:43.726814 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:44 crc kubenswrapper[4858]: I1122 07:50:44.384255 4858 generic.go:334] "Generic (PLEG): container finished" podID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerID="251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d" exitCode=0 Nov 22 07:50:44 crc kubenswrapper[4858]: I1122 07:50:44.384333 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerDied","Data":"251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d"} Nov 22 07:50:44 crc kubenswrapper[4858]: I1122 07:50:44.384778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerStarted","Data":"5d52d451dd09d5f2f84883e936eb6998aa5d87b5f3fcfe5976839917dcea699e"} Nov 22 07:50:46 crc kubenswrapper[4858]: I1122 07:50:46.406248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerStarted","Data":"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7"} Nov 22 07:50:47 crc kubenswrapper[4858]: I1122 07:50:47.417951 4858 generic.go:334] "Generic (PLEG): container finished" podID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerID="659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7" exitCode=0 Nov 22 07:50:47 crc kubenswrapper[4858]: I1122 07:50:47.418028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerDied","Data":"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7"} Nov 22 07:50:48 crc kubenswrapper[4858]: I1122 07:50:48.430134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" 
event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerStarted","Data":"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb"} Nov 22 07:50:48 crc kubenswrapper[4858]: I1122 07:50:48.448442 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6r5p" podStartSLOduration=2.899296839 podStartE2EDuration="6.448364804s" podCreationTimestamp="2025-11-22 07:50:42 +0000 UTC" firstStartedPulling="2025-11-22 07:50:44.38688359 +0000 UTC m=+2406.228306596" lastFinishedPulling="2025-11-22 07:50:47.935951555 +0000 UTC m=+2409.777374561" observedRunningTime="2025-11-22 07:50:48.446520485 +0000 UTC m=+2410.287943501" watchObservedRunningTime="2025-11-22 07:50:48.448364804 +0000 UTC m=+2410.289787820" Nov 22 07:50:53 crc kubenswrapper[4858]: I1122 07:50:53.243965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:53 crc kubenswrapper[4858]: I1122 07:50:53.245159 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:53 crc kubenswrapper[4858]: I1122 07:50:53.290756 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:53 crc kubenswrapper[4858]: I1122 07:50:53.514250 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:54 crc kubenswrapper[4858]: I1122 07:50:54.535560 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:50:54 crc kubenswrapper[4858]: E1122 07:50:54.536031 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:50:54 crc kubenswrapper[4858]: I1122 07:50:54.696310 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:56 crc kubenswrapper[4858]: I1122 07:50:56.490728 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6r5p" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="registry-server" containerID="cri-o://2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb" gracePeriod=2 Nov 22 07:50:56 crc kubenswrapper[4858]: I1122 07:50:56.927411 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.061578 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities\") pod \"77d240bd-aa71-4a98-99a8-243cb65198f9\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.061750 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content\") pod \"77d240bd-aa71-4a98-99a8-243cb65198f9\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.061819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds2rw\" (UniqueName: \"kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw\") pod \"77d240bd-aa71-4a98-99a8-243cb65198f9\" (UID: \"77d240bd-aa71-4a98-99a8-243cb65198f9\") " Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.062622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities" (OuterVolumeSpecName: "utilities") pod "77d240bd-aa71-4a98-99a8-243cb65198f9" (UID: "77d240bd-aa71-4a98-99a8-243cb65198f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.068893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw" (OuterVolumeSpecName: "kube-api-access-ds2rw") pod "77d240bd-aa71-4a98-99a8-243cb65198f9" (UID: "77d240bd-aa71-4a98-99a8-243cb65198f9"). InnerVolumeSpecName "kube-api-access-ds2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.085703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77d240bd-aa71-4a98-99a8-243cb65198f9" (UID: "77d240bd-aa71-4a98-99a8-243cb65198f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.163748 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.163795 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77d240bd-aa71-4a98-99a8-243cb65198f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.163809 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds2rw\" (UniqueName: \"kubernetes.io/projected/77d240bd-aa71-4a98-99a8-243cb65198f9-kube-api-access-ds2rw\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.500902 4858 generic.go:334] "Generic (PLEG): container finished" podID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerID="2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb" exitCode=0 Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.500966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerDied","Data":"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb"} Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.501004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6r5p" event={"ID":"77d240bd-aa71-4a98-99a8-243cb65198f9","Type":"ContainerDied","Data":"5d52d451dd09d5f2f84883e936eb6998aa5d87b5f3fcfe5976839917dcea699e"} Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.501026 4858 scope.go:117] "RemoveContainer" containerID="2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.501026 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6r5p" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.531233 4858 scope.go:117] "RemoveContainer" containerID="659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.547697 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.547753 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6r5p"] Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.573208 4858 scope.go:117] "RemoveContainer" containerID="251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.593689 4858 scope.go:117] "RemoveContainer" containerID="2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb" Nov 22 07:50:57 crc kubenswrapper[4858]: E1122 07:50:57.594168 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb\": container with ID starting with 2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb not found: ID does not exist" containerID="2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.594210 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb"} err="failed to get container status \"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb\": rpc error: code = NotFound desc = could not find container \"2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb\": container with ID starting with 2ebb21cabb2dfd284c264b013a226542c2560cb2c6ac15c19bf189544b70d5fb not found: ID does not exist" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.594240 4858 scope.go:117] "RemoveContainer" containerID="659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7" Nov 22 07:50:57 crc kubenswrapper[4858]: E1122 07:50:57.594702 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7\": container with ID starting with 659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7 not found: ID does not exist" containerID="659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.594741 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7"} err="failed to get container status \"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7\": rpc error: code = NotFound desc = could not find container \"659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7\": container with ID starting with 659e0f700723bf6fcd2d705f5d529d884d74fcd1dd2121b5cbbc85004a5928d7 not found: ID does not exist" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.594799 4858 scope.go:117] "RemoveContainer" containerID="251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d" Nov 22 07:50:57 crc kubenswrapper[4858]: E1122 07:50:57.595154 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d\": container with ID starting with 251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d not found: ID does not exist" containerID="251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d" Nov 22 07:50:57 crc kubenswrapper[4858]: I1122 07:50:57.595184 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d"} err="failed to get container status \"251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d\": rpc error: code = NotFound desc = could not find container \"251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d\": container with ID starting with 251a008998e09fefd0780d4896ffd8bbb35aa3c8a76b5c740b0fb559f7eab83d not found: ID does not exist" Nov 22 07:50:59 crc kubenswrapper[4858]: I1122 07:50:59.583493 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" path="/var/lib/kubelet/pods/77d240bd-aa71-4a98-99a8-243cb65198f9/volumes" Nov 22 07:51:07 crc kubenswrapper[4858]: I1122 07:51:07.536420 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:51:07 crc kubenswrapper[4858]: E1122 07:51:07.541141 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:51:22 crc kubenswrapper[4858]: I1122 07:51:22.536134 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:51:22 crc kubenswrapper[4858]: E1122 07:51:22.537399 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:51:30 crc kubenswrapper[4858]: I1122 07:51:30.451852 4858 scope.go:117] "RemoveContainer" containerID="a264ec2d1761e844139d64f8cfd921295756c1e25bdc2ca727b8eecd6b023c10" Nov 22 07:51:30 crc kubenswrapper[4858]: I1122 07:51:30.485138 4858 scope.go:117] "RemoveContainer" containerID="9a6bd0f287f81f32a2ecd007606ab984ffaa840e52edd920e197cf1530362f85" Nov 22 07:51:30 crc kubenswrapper[4858]: I1122 07:51:30.527523 4858 scope.go:117] "RemoveContainer" containerID="ab0829a2a45dd01e2464217508ca78a6a10b04e91b998fe9def047ef8aebbd38" Nov 22 07:51:30 crc kubenswrapper[4858]: I1122 07:51:30.551789 4858 scope.go:117] "RemoveContainer" containerID="98e56862b8436374df28d433ceba2eba7598bc78c7a8982ea0f1f152b99d551a" Nov 22 07:51:30 crc kubenswrapper[4858]: I1122 07:51:30.577661 4858 scope.go:117] "RemoveContainer" containerID="9fa0d715445d9cabd5993deddac4cf06600dfcb8a11d1fc5d81fa7dadce6684f" Nov 22 07:51:34 crc kubenswrapper[4858]: I1122 07:51:34.536164 4858 scope.go:117] "RemoveContainer" 
containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:51:34 crc kubenswrapper[4858]: E1122 07:51:34.536743 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:51:47 crc kubenswrapper[4858]: I1122 07:51:47.536237 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:51:47 crc kubenswrapper[4858]: E1122 07:51:47.536984 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:52:02 crc kubenswrapper[4858]: I1122 07:52:02.535281 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:52:02 crc kubenswrapper[4858]: E1122 07:52:02.535954 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:52:17 crc kubenswrapper[4858]: I1122 07:52:17.536167 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:52:17 crc kubenswrapper[4858]: E1122 07:52:17.537180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:52:30 crc kubenswrapper[4858]: I1122 07:52:30.714060 4858 scope.go:117] "RemoveContainer" containerID="df6351eb07779190404e7510c779e428984d0ff82f2f65b8c045ff400d0f540b" Nov 22 07:52:30 crc kubenswrapper[4858]: I1122 07:52:30.745419 4858 scope.go:117] "RemoveContainer" containerID="51604b1dd7eceb22876c5f2824f93728dd6ccb3368e18bfb5bdbfd78f9ae8589" Nov 22 07:52:30 crc kubenswrapper[4858]: I1122 07:52:30.770706 4858 scope.go:117] "RemoveContainer" containerID="a0adda39f79e6c29822139189a3c320fe6ee86b411f22c91f3e5eaceb048c381" Nov 22 07:52:30 crc kubenswrapper[4858]: I1122 07:52:30.799924 4858 scope.go:117] "RemoveContainer" containerID="358f5eea1c33599a6ff9d0f49219f36c9849f142f1d83d32c74db35d272f5419" Nov 22 07:52:30 crc kubenswrapper[4858]: I1122 07:52:30.832863 4858 scope.go:117] "RemoveContainer" containerID="16930abb64b29909bb858a278fc5b86a9cc7607ab57cf00aae8bb400015451f7" Nov 22 07:52:31 crc kubenswrapper[4858]: I1122 07:52:31.537014 4858 
scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:52:31 crc kubenswrapper[4858]: E1122 07:52:31.537576 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:52:46 crc kubenswrapper[4858]: I1122 07:52:46.535784 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:52:46 crc kubenswrapper[4858]: E1122 07:52:46.536484 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:52:57 crc kubenswrapper[4858]: I1122 07:52:57.536170 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:52:57 crc kubenswrapper[4858]: E1122 07:52:57.537136 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:53:10 crc kubenswrapper[4858]: I1122 07:53:10.536453 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:53:10 crc kubenswrapper[4858]: E1122 07:53:10.537347 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:53:21 crc kubenswrapper[4858]: I1122 07:53:21.535429 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:53:21 crc kubenswrapper[4858]: E1122 07:53:21.536063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:53:30 crc kubenswrapper[4858]: I1122 07:53:30.966752 4858 scope.go:117] "RemoveContainer" containerID="e08b7a1f8e2e8f5bdd733d2f70df309fb24c38853d5d408bf801b16aee9f17da" Nov 22 07:53:36 crc kubenswrapper[4858]: I1122 07:53:36.536312 4858 scope.go:117] 
"RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:53:36 crc kubenswrapper[4858]: E1122 07:53:36.537222 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:53:51 crc kubenswrapper[4858]: I1122 07:53:51.536506 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:53:51 crc kubenswrapper[4858]: E1122 07:53:51.537393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:54:04 crc kubenswrapper[4858]: I1122 07:54:04.535665 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:54:04 crc kubenswrapper[4858]: E1122 07:54:04.536995 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:54:18 crc kubenswrapper[4858]: I1122 07:54:18.536066 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:54:18 crc kubenswrapper[4858]: E1122 07:54:18.536888 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:54:29 crc kubenswrapper[4858]: I1122 07:54:29.541215 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:54:29 crc kubenswrapper[4858]: E1122 07:54:29.544579 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.047754 4858 scope.go:117] "RemoveContainer" containerID="5dae7ef1cf0b3974032face8f70aee5fa5e4c4f2e7d4ca85f75144f7a600b8fc" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.070178 4858 scope.go:117] 
"RemoveContainer" containerID="c5ed51b8583e97f2df4a7b4d36a5dee9f21c7fa973fc8d4bfdf95afaa4f89084" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.087276 4858 scope.go:117] "RemoveContainer" containerID="a781dbc0e48e09fab41130a13b423bf6eab57d04da347dc3c059feb78f08659a" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.105780 4858 scope.go:117] "RemoveContainer" containerID="85a50d90c74f2b5f201ff660a1988de49cf64fafadb3d13bf6855ce5e7b51da3" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.155250 4858 scope.go:117] "RemoveContainer" containerID="ba14a6eadf4f6ecaaaac7e03e75a0670b78a68e6d491fb4484cc6fca27e15f36" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.186504 4858 scope.go:117] "RemoveContainer" containerID="1bb4c806d1cb05f1f60e1ba8c1208b2be55a1159bd9469a09e52ee36372aa442" Nov 22 07:54:31 crc kubenswrapper[4858]: I1122 07:54:31.205289 4858 scope.go:117] "RemoveContainer" containerID="d2fce1b7f44ee254502c1ee4737ddad02ab713e7ede13cb487c2720cd88d281e" Nov 22 07:54:40 crc kubenswrapper[4858]: I1122 07:54:40.535506 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:54:40 crc kubenswrapper[4858]: E1122 07:54:40.536227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:54:55 crc kubenswrapper[4858]: I1122 07:54:55.536039 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:54:55 crc kubenswrapper[4858]: E1122 07:54:55.536893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:55:10 crc kubenswrapper[4858]: I1122 07:55:10.535722 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:55:10 crc kubenswrapper[4858]: E1122 07:55:10.536437 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 07:55:24 crc kubenswrapper[4858]: I1122 07:55:24.536492 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:55:25 crc kubenswrapper[4858]: I1122 07:55:25.604822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a"} Nov 22 07:55:31 crc 
kubenswrapper[4858]: I1122 07:55:31.303486 4858 scope.go:117] "RemoveContainer" containerID="fba205941defd92fcd251d1c2b531399282991083a611c7e300acf8909c975a6" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.324033 4858 scope.go:117] "RemoveContainer" containerID="95736b04b771d7768eb3f3b40cbcad3bbfcc5992261841d7f094e34a12830692" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.354555 4858 scope.go:117] "RemoveContainer" containerID="e3680cb319e6b254d9fb55c5079fa27ee9c17bc3d07f92905d53af9f7a03083e" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.381026 4858 scope.go:117] "RemoveContainer" containerID="5e06a2e54f9ce93dc2ccfd9061b8c1c351688721b54e58b2245aca1b06036b6b" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.404359 4858 scope.go:117] "RemoveContainer" containerID="edf1bfbecea443bbc7e129c90a394a31314a390042585bb52ca713b177380f29" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.433648 4858 scope.go:117] "RemoveContainer" containerID="81422c98867143038ef4ffa6c2f72f05f237ab29f232ccca07fb76aa145ecc3f" Nov 22 07:55:31 crc kubenswrapper[4858]: I1122 07:55:31.468423 4858 scope.go:117] "RemoveContainer" containerID="c5f22872e946765c3b927d5609c7ae86097005d9299f538ec9bec6ac660eef39" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.682873 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:27 crc kubenswrapper[4858]: E1122 07:57:27.683788 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="registry-server" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.683804 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="registry-server" Nov 22 07:57:27 crc kubenswrapper[4858]: E1122 07:57:27.683817 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="extract-utilities" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.683825 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="extract-utilities" Nov 22 07:57:27 crc kubenswrapper[4858]: E1122 07:57:27.683831 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="extract-content" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.683840 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="extract-content" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.684021 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d240bd-aa71-4a98-99a8-243cb65198f9" containerName="registry-server" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.685148 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.712406 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.756272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.756334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.756405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcng2\" (UniqueName: \"kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.857404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.857487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.857546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcng2\" (UniqueName: \"kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.857961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.858451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:27 crc kubenswrapper[4858]: I1122 07:57:27.882866 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rcng2\" (UniqueName: \"kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2\") pod \"community-operators-f2pt8\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.003044 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.426626 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.869302 4858 generic.go:334] "Generic (PLEG): container finished" podID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerID="21e57da4a018ea0bcfc220feaba866019b0cbc3ff455288b773183b815312f50" exitCode=0 Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.869608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerDied","Data":"21e57da4a018ea0bcfc220feaba866019b0cbc3ff455288b773183b815312f50"} Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.869640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerStarted","Data":"ee558d32007d442eaf663196b44d397a2904fcf6bc59e19fc4b4b2d86c4ad750"} Nov 22 07:57:28 crc kubenswrapper[4858]: I1122 07:57:28.872465 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:57:29 crc kubenswrapper[4858]: I1122 07:57:29.880974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerStarted","Data":"f32bef7861c868ed04b868ac47bd179f78548dc011bf408126f932caae712307"} Nov 22 07:57:30 crc kubenswrapper[4858]: I1122 07:57:30.893008 4858 generic.go:334] "Generic (PLEG): container finished" podID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerID="f32bef7861c868ed04b868ac47bd179f78548dc011bf408126f932caae712307" exitCode=0 Nov 22 07:57:30 crc kubenswrapper[4858]: I1122 07:57:30.893071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerDied","Data":"f32bef7861c868ed04b868ac47bd179f78548dc011bf408126f932caae712307"} Nov 22 07:57:31 crc kubenswrapper[4858]: I1122 07:57:31.904761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerStarted","Data":"8ea0b1c9167287204a2de7411ad5a8171c8fe4abb28ec72ca50597fab1fc87ae"} Nov 22 07:57:31 crc kubenswrapper[4858]: I1122 07:57:31.924368 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f2pt8" podStartSLOduration=2.207697611 podStartE2EDuration="4.924340302s" podCreationTimestamp="2025-11-22 07:57:27 +0000 UTC" firstStartedPulling="2025-11-22 07:57:28.872165759 +0000 UTC m=+2810.713588765" lastFinishedPulling="2025-11-22 07:57:31.58880845 +0000 UTC m=+2813.430231456" observedRunningTime="2025-11-22 07:57:31.922917817 +0000 UTC m=+2813.764340843" watchObservedRunningTime="2025-11-22 
07:57:31.924340302 +0000 UTC m=+2813.765763318" Nov 22 07:57:38 crc kubenswrapper[4858]: I1122 07:57:38.004504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:38 crc kubenswrapper[4858]: I1122 07:57:38.005034 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:38 crc kubenswrapper[4858]: I1122 07:57:38.053067 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:39 crc kubenswrapper[4858]: I1122 07:57:39.008662 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:39 crc kubenswrapper[4858]: I1122 07:57:39.058205 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:40 crc kubenswrapper[4858]: I1122 07:57:40.976018 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f2pt8" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="registry-server" containerID="cri-o://8ea0b1c9167287204a2de7411ad5a8171c8fe4abb28ec72ca50597fab1fc87ae" gracePeriod=2 Nov 22 07:57:41 crc kubenswrapper[4858]: I1122 07:57:41.987649 4858 generic.go:334] "Generic (PLEG): container finished" podID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerID="8ea0b1c9167287204a2de7411ad5a8171c8fe4abb28ec72ca50597fab1fc87ae" exitCode=0 Nov 22 07:57:41 crc kubenswrapper[4858]: I1122 07:57:41.988201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerDied","Data":"8ea0b1c9167287204a2de7411ad5a8171c8fe4abb28ec72ca50597fab1fc87ae"} Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.200024 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.316759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcng2\" (UniqueName: \"kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2\") pod \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.316908 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities\") pod \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.316991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content\") pod \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\" (UID: \"d869f633-1b2e-4748-b6c6-a1a2f29b72bc\") " Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.318117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities" (OuterVolumeSpecName: "utilities") pod "d869f633-1b2e-4748-b6c6-a1a2f29b72bc" (UID: "d869f633-1b2e-4748-b6c6-a1a2f29b72bc"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.329602 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2" (OuterVolumeSpecName: "kube-api-access-rcng2") pod "d869f633-1b2e-4748-b6c6-a1a2f29b72bc" (UID: "d869f633-1b2e-4748-b6c6-a1a2f29b72bc"). InnerVolumeSpecName "kube-api-access-rcng2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.375899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d869f633-1b2e-4748-b6c6-a1a2f29b72bc" (UID: "d869f633-1b2e-4748-b6c6-a1a2f29b72bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.418871 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.418929 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcng2\" (UniqueName: \"kubernetes.io/projected/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-kube-api-access-rcng2\") on node \"crc\" DevicePath \"\"" Nov 22 07:57:42 crc kubenswrapper[4858]: I1122 07:57:42.418945 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d869f633-1b2e-4748-b6c6-a1a2f29b72bc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.000846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2pt8" event={"ID":"d869f633-1b2e-4748-b6c6-a1a2f29b72bc","Type":"ContainerDied","Data":"ee558d32007d442eaf663196b44d397a2904fcf6bc59e19fc4b4b2d86c4ad750"} Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.000923 4858 scope.go:117] "RemoveContainer" containerID="8ea0b1c9167287204a2de7411ad5a8171c8fe4abb28ec72ca50597fab1fc87ae" Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.000968 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f2pt8" Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.042209 4858 scope.go:117] "RemoveContainer" containerID="f32bef7861c868ed04b868ac47bd179f78548dc011bf408126f932caae712307" Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.045578 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.053263 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f2pt8"] Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.077585 4858 scope.go:117] "RemoveContainer" containerID="21e57da4a018ea0bcfc220feaba866019b0cbc3ff455288b773183b815312f50" Nov 22 07:57:43 crc kubenswrapper[4858]: I1122 07:57:43.546088 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" path="/var/lib/kubelet/pods/d869f633-1b2e-4748-b6c6-a1a2f29b72bc/volumes" Nov 22 07:57:45 crc kubenswrapper[4858]: I1122 07:57:45.312081 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:57:45 crc kubenswrapper[4858]: I1122 07:57:45.312483 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.632420 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:01 crc kubenswrapper[4858]: E1122 07:58:01.633389 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="registry-server" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.633409 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="registry-server" Nov 22 07:58:01 crc kubenswrapper[4858]: E1122 07:58:01.633437 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="extract-utilities" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.633448 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="extract-utilities" Nov 22 07:58:01 crc kubenswrapper[4858]: E1122 07:58:01.633468 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="extract-content" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.633476 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="extract-content" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.633665 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d869f633-1b2e-4748-b6c6-a1a2f29b72bc" containerName="registry-server" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.634963 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.652689 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.692219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.692297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.692424 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbb6q\" (UniqueName: \"kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.793901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.793977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.794045 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbb6q\" (UniqueName: \"kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.794709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.794830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.817655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mbb6q\" (UniqueName: \"kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q\") pod \"redhat-operators-zj544\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:01 crc kubenswrapper[4858]: I1122 07:58:01.958282 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:02 crc kubenswrapper[4858]: I1122 07:58:02.456997 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:03 crc kubenswrapper[4858]: I1122 07:58:03.161903 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerID="a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff" exitCode=0 Nov 22 07:58:03 crc kubenswrapper[4858]: I1122 07:58:03.162025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerDied","Data":"a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff"} Nov 22 07:58:03 crc kubenswrapper[4858]: I1122 07:58:03.162263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerStarted","Data":"3e9451e06d6a0eeebf7904ad34348dcdc5da1ccb10ddf00264d17d5d6fe7c1a4"} Nov 22 07:58:05 crc kubenswrapper[4858]: I1122 07:58:05.183443 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerID="762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7" exitCode=0 Nov 22 07:58:05 crc kubenswrapper[4858]: I1122 07:58:05.183621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerDied","Data":"762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7"} Nov 22 07:58:06 crc kubenswrapper[4858]: I1122 07:58:06.202776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerStarted","Data":"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1"} Nov 22 07:58:06 crc kubenswrapper[4858]: I1122 07:58:06.227152 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zj544" podStartSLOduration=2.731633941 podStartE2EDuration="5.227128606s" podCreationTimestamp="2025-11-22 07:58:01 +0000 UTC" firstStartedPulling="2025-11-22 07:58:03.163504677 +0000 UTC m=+2845.004927683" lastFinishedPulling="2025-11-22 07:58:05.658999342 +0000 UTC m=+2847.500422348" observedRunningTime="2025-11-22 07:58:06.219503022 +0000 UTC m=+2848.060926038" watchObservedRunningTime="2025-11-22 07:58:06.227128606 +0000 UTC m=+2848.068551612" Nov 22 07:58:11 crc kubenswrapper[4858]: I1122 07:58:11.959339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:11 crc kubenswrapper[4858]: I1122 07:58:11.959988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:12 crc kubenswrapper[4858]: I1122 07:58:12.009295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:12 crc kubenswrapper[4858]: I1122 07:58:12.293514 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:12 crc kubenswrapper[4858]: I1122 07:58:12.338505 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:14 crc kubenswrapper[4858]: I1122 07:58:14.270516 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zj544" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="registry-server" containerID="cri-o://0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1" gracePeriod=2 Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.200248 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.280903 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerID="0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1" exitCode=0 Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.280991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerDied","Data":"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1"} Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.281566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj544" event={"ID":"8ac3cd25-37ed-497b-973f-d236fbac1b3f","Type":"ContainerDied","Data":"3e9451e06d6a0eeebf7904ad34348dcdc5da1ccb10ddf00264d17d5d6fe7c1a4"} Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.281601 4858 scope.go:117] "RemoveContainer" containerID="0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.281075 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zj544" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.301612 4858 scope.go:117] "RemoveContainer" containerID="762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.312645 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.312709 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.321482 4858 scope.go:117] "RemoveContainer" containerID="a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.347286 4858 scope.go:117] "RemoveContainer" containerID="0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1" Nov 22 07:58:15 crc kubenswrapper[4858]: E1122 07:58:15.347909 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1\": container with ID starting with 0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1 not found: ID does not exist" containerID="0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.347957 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1"} err="failed to get container status \"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1\": rpc error: code = NotFound desc = could not find container \"0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1\": container with ID starting with 0959e31f2f334c3b8a8ca267ad5c12a741afe90aee45e79c2fb03d2868428ae1 not found: ID does not exist" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.348008 4858 scope.go:117] "RemoveContainer" containerID="762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7" Nov 22 07:58:15 crc kubenswrapper[4858]: E1122 07:58:15.348443 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7\": container with ID starting with 762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7 not found: ID does not exist" containerID="762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.348499 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7"} err="failed to get container status \"762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7\": rpc error: code = NotFound desc = could not find container \"762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7\": container with ID starting 
with 762b422617a4d9b39ed0efac54d87b4735898a2dc53f7ec24f7e94e35750eea7 not found: ID does not exist" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.348537 4858 scope.go:117] "RemoveContainer" containerID="a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff" Nov 22 07:58:15 crc kubenswrapper[4858]: E1122 07:58:15.349000 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff\": container with ID starting with a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff not found: ID does not exist" containerID="a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.349038 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff"} err="failed to get container status \"a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff\": rpc error: code = NotFound desc = could not find container \"a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff\": container with ID starting with a8f91f254ca9b70847c349ad89724ff9422309f79ca7276d5f2b245aa4c87cff not found: ID does not exist" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.387294 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities\") pod \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.387426 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbb6q\" (UniqueName: \"kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q\") pod \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.387520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content\") pod \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\" (UID: \"8ac3cd25-37ed-497b-973f-d236fbac1b3f\") " Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.388309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities" (OuterVolumeSpecName: "utilities") pod "8ac3cd25-37ed-497b-973f-d236fbac1b3f" (UID: "8ac3cd25-37ed-497b-973f-d236fbac1b3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.392603 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q" (OuterVolumeSpecName: "kube-api-access-mbb6q") pod "8ac3cd25-37ed-497b-973f-d236fbac1b3f" (UID: "8ac3cd25-37ed-497b-973f-d236fbac1b3f"). InnerVolumeSpecName "kube-api-access-mbb6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.489384 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:15 crc kubenswrapper[4858]: I1122 07:58:15.489437 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbb6q\" (UniqueName: \"kubernetes.io/projected/8ac3cd25-37ed-497b-973f-d236fbac1b3f-kube-api-access-mbb6q\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:17 crc kubenswrapper[4858]: I1122 07:58:17.925896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ac3cd25-37ed-497b-973f-d236fbac1b3f" (UID: "8ac3cd25-37ed-497b-973f-d236fbac1b3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:18 crc kubenswrapper[4858]: I1122 07:58:18.011215 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:18 crc kubenswrapper[4858]: I1122 07:58:18.016348 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zj544"] Nov 22 07:58:18 crc kubenswrapper[4858]: I1122 07:58:18.027388 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac3cd25-37ed-497b-973f-d236fbac1b3f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:19 crc kubenswrapper[4858]: I1122 07:58:19.547876 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" path="/var/lib/kubelet/pods/8ac3cd25-37ed-497b-973f-d236fbac1b3f/volumes" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.971119 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:25 crc kubenswrapper[4858]: E1122 07:58:25.971823 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="extract-utilities" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.971840 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="extract-utilities" Nov 22 07:58:25 crc kubenswrapper[4858]: E1122 07:58:25.971854 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="extract-content" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.971861 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="extract-content" Nov 22 07:58:25 crc kubenswrapper[4858]: E1122 07:58:25.971886 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="registry-server" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.971894 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="registry-server" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.972593 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac3cd25-37ed-497b-973f-d236fbac1b3f" containerName="registry-server" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.974792 4858 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:25 crc kubenswrapper[4858]: I1122 07:58:25.986617 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.139146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqxp\" (UniqueName: \"kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.139200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.139256 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.240487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxqxp\" (UniqueName: \"kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.241033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.241602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.241743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.242042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.261213 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxqxp\" (UniqueName: \"kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp\") pod \"certified-operators-48r87\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.299417 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:26 crc kubenswrapper[4858]: I1122 07:58:26.804405 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:27 crc kubenswrapper[4858]: I1122 07:58:27.378885 4858 generic.go:334] "Generic (PLEG): container finished" podID="983b2378-722d-4533-978b-fbeaac5c1596" containerID="fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472" exitCode=0 Nov 22 07:58:27 crc kubenswrapper[4858]: I1122 07:58:27.378950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerDied","Data":"fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472"} Nov 22 07:58:27 crc kubenswrapper[4858]: I1122 07:58:27.378983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerStarted","Data":"e04ea0cf6b19cd91c4e7cb5608d1663787e1102c0dedf9bf93eb6a258265d959"} Nov 22 07:58:28 crc kubenswrapper[4858]: I1122 07:58:28.393259 4858 generic.go:334] "Generic (PLEG): container finished" podID="983b2378-722d-4533-978b-fbeaac5c1596" containerID="1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845" exitCode=0 Nov 22 07:58:28 crc kubenswrapper[4858]: I1122 07:58:28.393406 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerDied","Data":"1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845"} Nov 22 07:58:29 crc kubenswrapper[4858]: I1122 07:58:29.405522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerStarted","Data":"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3"} Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.299994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.300916 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.344832 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.366344 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48r87" podStartSLOduration=9.923902078 podStartE2EDuration="11.366297018s" podCreationTimestamp="2025-11-22 07:58:25 +0000 UTC" firstStartedPulling="2025-11-22 07:58:27.380789698 +0000 UTC m=+2869.222212704" lastFinishedPulling="2025-11-22 07:58:28.823184638 +0000 UTC m=+2870.664607644" 
observedRunningTime="2025-11-22 07:58:29.428419873 +0000 UTC m=+2871.269842899" watchObservedRunningTime="2025-11-22 07:58:36.366297018 +0000 UTC m=+2878.207720024" Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.501914 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:36 crc kubenswrapper[4858]: I1122 07:58:36.583451 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:38 crc kubenswrapper[4858]: I1122 07:58:38.470601 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48r87" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="registry-server" containerID="cri-o://e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3" gracePeriod=2 Nov 22 07:58:38 crc kubenswrapper[4858]: I1122 07:58:38.912345 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.028107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqxp\" (UniqueName: \"kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp\") pod \"983b2378-722d-4533-978b-fbeaac5c1596\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.029028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities\") pod \"983b2378-722d-4533-978b-fbeaac5c1596\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.029106 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content\") pod \"983b2378-722d-4533-978b-fbeaac5c1596\" (UID: \"983b2378-722d-4533-978b-fbeaac5c1596\") " Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.030205 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities" (OuterVolumeSpecName: "utilities") pod "983b2378-722d-4533-978b-fbeaac5c1596" (UID: "983b2378-722d-4533-978b-fbeaac5c1596"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.035415 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp" (OuterVolumeSpecName: "kube-api-access-wxqxp") pod "983b2378-722d-4533-978b-fbeaac5c1596" (UID: "983b2378-722d-4533-978b-fbeaac5c1596"). InnerVolumeSpecName "kube-api-access-wxqxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.130942 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.130986 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxqxp\" (UniqueName: \"kubernetes.io/projected/983b2378-722d-4533-978b-fbeaac5c1596-kube-api-access-wxqxp\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.481768 4858 generic.go:334] "Generic (PLEG): container finished" podID="983b2378-722d-4533-978b-fbeaac5c1596" containerID="e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3" exitCode=0 Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.481835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerDied","Data":"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3"} Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.481873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48r87" event={"ID":"983b2378-722d-4533-978b-fbeaac5c1596","Type":"ContainerDied","Data":"e04ea0cf6b19cd91c4e7cb5608d1663787e1102c0dedf9bf93eb6a258265d959"} Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.481894 4858 scope.go:117] "RemoveContainer" containerID="e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.482216 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48r87" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.505860 4858 scope.go:117] "RemoveContainer" containerID="1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.528203 4858 scope.go:117] "RemoveContainer" containerID="fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.560742 4858 scope.go:117] "RemoveContainer" containerID="e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3" Nov 22 07:58:39 crc kubenswrapper[4858]: E1122 07:58:39.561624 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3\": container with ID starting with e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3 not found: ID does not exist" containerID="e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.561748 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3"} err="failed to get container status \"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3\": rpc error: code = NotFound desc = could not find container \"e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3\": container with ID starting with e8f7af8b9c8dcbec222c86e27f260a2370781b21a1dea4f6a975cfb4c6ec61e3 not found: ID does not exist" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.561849 4858 scope.go:117] "RemoveContainer" containerID="1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845" Nov 22 07:58:39 crc kubenswrapper[4858]: E1122 07:58:39.562242 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845\": container with ID starting with 1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845 not found: ID does not exist" containerID="1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.562364 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845"} err="failed to get container status \"1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845\": rpc error: code = NotFound desc = could not find container \"1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845\": container with ID starting with 1a9a601f5793bbe2aaae844c910a3df602a64fa2d60a4f8404c4a268eac96845 not found: ID does not exist" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.562475 4858 scope.go:117] "RemoveContainer" containerID="fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472" Nov 22 07:58:39 crc kubenswrapper[4858]: E1122 07:58:39.562838 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472\": container with ID starting with fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472 not found: ID does not exist" containerID="fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472" 
Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.562877 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472"} err="failed to get container status \"fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472\": rpc error: code = NotFound desc = could not find container \"fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472\": container with ID starting with fe68488d8bb287e64bdd5c9ec84371b55fe8e437997946a3de6e2b5bf5e84472 not found: ID does not exist" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.678518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "983b2378-722d-4533-978b-fbeaac5c1596" (UID: "983b2378-722d-4533-978b-fbeaac5c1596"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.746586 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983b2378-722d-4533-978b-fbeaac5c1596-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.821570 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:39 crc kubenswrapper[4858]: I1122 07:58:39.826756 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48r87"] Nov 22 07:58:41 crc kubenswrapper[4858]: I1122 07:58:41.545456 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983b2378-722d-4533-978b-fbeaac5c1596" path="/var/lib/kubelet/pods/983b2378-722d-4533-978b-fbeaac5c1596/volumes" Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.311963 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.312360 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.312434 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.313200 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.313260 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" 
containerName="machine-config-daemon" containerID="cri-o://dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a" gracePeriod=600 Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.537842 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a" exitCode=0 Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.546428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a"} Nov 22 07:58:45 crc kubenswrapper[4858]: I1122 07:58:45.546511 4858 scope.go:117] "RemoveContainer" containerID="b21310f4a57f8247489c2b4c86c621e9ed3340041d0ea038d09264f4bbdb888a" Nov 22 07:58:46 crc kubenswrapper[4858]: I1122 07:58:46.549666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428"} Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.182283 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt"] Nov 22 08:00:00 crc kubenswrapper[4858]: E1122 08:00:00.183158 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.183175 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4858]: E1122 08:00:00.183202 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.183208 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4858]: E1122 08:00:00.183220 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.183225 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.183434 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="983b2378-722d-4533-978b-fbeaac5c1596" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.184109 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.186876 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.187230 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.223579 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt"] Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.246578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.246645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.246715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjcz\" (UniqueName: \"kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.348580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.348638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.348700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjcz\" (UniqueName: \"kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.350234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume\") pod 
\"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.355057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.369788 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjcz\" (UniqueName: \"kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz\") pod \"collect-profiles-29396640-jhsgt\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.518453 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:00 crc kubenswrapper[4858]: I1122 08:00:00.969458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt"] Nov 22 08:00:01 crc kubenswrapper[4858]: I1122 08:00:01.111797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" event={"ID":"43c6519f-81ff-402d-abb4-1dd51ba8a85c","Type":"ContainerStarted","Data":"90ab5ae94da54489fb78559cd309ed6b684dfd42d8770e3e1f7c64b5ee8bc7a2"} Nov 22 08:00:02 crc kubenswrapper[4858]: I1122 08:00:02.122268 4858 generic.go:334] "Generic (PLEG): container finished" podID="43c6519f-81ff-402d-abb4-1dd51ba8a85c" containerID="8a895b353a5e3fd683763f205a08e337dc3cf9576ba69cd1ee05d9566036363d" exitCode=0 Nov 22 08:00:02 crc kubenswrapper[4858]: I1122 08:00:02.122373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" event={"ID":"43c6519f-81ff-402d-abb4-1dd51ba8a85c","Type":"ContainerDied","Data":"8a895b353a5e3fd683763f205a08e337dc3cf9576ba69cd1ee05d9566036363d"} Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.407659 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.589890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtjcz\" (UniqueName: \"kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz\") pod \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.590464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume\") pod \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.590951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume\") pod \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\" (UID: \"43c6519f-81ff-402d-abb4-1dd51ba8a85c\") " Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.592460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume" (OuterVolumeSpecName: "config-volume") pod "43c6519f-81ff-402d-abb4-1dd51ba8a85c" (UID: "43c6519f-81ff-402d-abb4-1dd51ba8a85c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.595983 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43c6519f-81ff-402d-abb4-1dd51ba8a85c" (UID: "43c6519f-81ff-402d-abb4-1dd51ba8a85c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.596171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz" (OuterVolumeSpecName: "kube-api-access-dtjcz") pod "43c6519f-81ff-402d-abb4-1dd51ba8a85c" (UID: "43c6519f-81ff-402d-abb4-1dd51ba8a85c"). InnerVolumeSpecName "kube-api-access-dtjcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.693221 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43c6519f-81ff-402d-abb4-1dd51ba8a85c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.693262 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c6519f-81ff-402d-abb4-1dd51ba8a85c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:03 crc kubenswrapper[4858]: I1122 08:00:03.693275 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtjcz\" (UniqueName: \"kubernetes.io/projected/43c6519f-81ff-402d-abb4-1dd51ba8a85c-kube-api-access-dtjcz\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:04 crc kubenswrapper[4858]: I1122 08:00:04.138440 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" event={"ID":"43c6519f-81ff-402d-abb4-1dd51ba8a85c","Type":"ContainerDied","Data":"90ab5ae94da54489fb78559cd309ed6b684dfd42d8770e3e1f7c64b5ee8bc7a2"} Nov 22 08:00:04 crc kubenswrapper[4858]: I1122 08:00:04.138489 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ab5ae94da54489fb78559cd309ed6b684dfd42d8770e3e1f7c64b5ee8bc7a2" Nov 22 08:00:04 crc kubenswrapper[4858]: I1122 08:00:04.138556 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt" Nov 22 08:00:04 crc kubenswrapper[4858]: I1122 08:00:04.482925 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42"] Nov 22 08:00:04 crc kubenswrapper[4858]: I1122 08:00:04.488108 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-qdx42"] Nov 22 08:00:05 crc kubenswrapper[4858]: I1122 08:00:05.552294 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a5eb82-d541-4f36-bed3-dda09042ee97" path="/var/lib/kubelet/pods/50a5eb82-d541-4f36-bed3-dda09042ee97/volumes" Nov 22 08:00:31 crc kubenswrapper[4858]: I1122 08:00:31.705852 4858 scope.go:117] "RemoveContainer" containerID="8c3b967ae6e961650bf7999871aa30c469a28a62fdcac606bef5d45c0f0697ec" Nov 22 08:00:45 crc kubenswrapper[4858]: I1122 08:00:45.312130 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:00:45 crc kubenswrapper[4858]: I1122 08:00:45.312708 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:01:15 crc kubenswrapper[4858]: I1122 08:01:15.312498 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 22 08:01:15 crc kubenswrapper[4858]: I1122 08:01:15.313059 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.134073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:18 crc kubenswrapper[4858]: E1122 08:01:18.135881 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c6519f-81ff-402d-abb4-1dd51ba8a85c" containerName="collect-profiles" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.136029 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c6519f-81ff-402d-abb4-1dd51ba8a85c" containerName="collect-profiles" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.136296 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c6519f-81ff-402d-abb4-1dd51ba8a85c" containerName="collect-profiles" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.137948 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.149044 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.203404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.203461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf72g\" (UniqueName: \"kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.203496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.304923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.304982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf72g\" (UniqueName: \"kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " 
pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.305014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.305582 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.305944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.327742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf72g\" (UniqueName: \"kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g\") pod \"redhat-marketplace-r8r6r\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.479797 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:18 crc kubenswrapper[4858]: I1122 08:01:18.907868 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:19 crc kubenswrapper[4858]: I1122 08:01:19.696624 4858 generic.go:334] "Generic (PLEG): container finished" podID="4339e02b-baeb-46be-a0ec-408982b898d8" containerID="9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8" exitCode=0 Nov 22 08:01:19 crc kubenswrapper[4858]: I1122 08:01:19.696963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerDied","Data":"9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8"} Nov 22 08:01:19 crc kubenswrapper[4858]: I1122 08:01:19.697030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerStarted","Data":"c5a731e01d3800861ef6905cc9a3e8bab9e4da9da3fbd519fe99f4c10cb8a260"} Nov 22 08:01:20 crc kubenswrapper[4858]: I1122 08:01:20.706459 4858 generic.go:334] "Generic (PLEG): container finished" podID="4339e02b-baeb-46be-a0ec-408982b898d8" containerID="892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5" exitCode=0 Nov 22 08:01:20 crc kubenswrapper[4858]: I1122 08:01:20.706511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerDied","Data":"892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5"} Nov 22 08:01:21 crc kubenswrapper[4858]: I1122 08:01:21.719012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerStarted","Data":"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4"} Nov 22 08:01:21 crc kubenswrapper[4858]: I1122 08:01:21.737714 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r8r6r" podStartSLOduration=2.288730206 podStartE2EDuration="3.737688409s" podCreationTimestamp="2025-11-22 08:01:18 +0000 UTC" firstStartedPulling="2025-11-22 08:01:19.700226792 +0000 UTC m=+3041.541649798" lastFinishedPulling="2025-11-22 08:01:21.149184985 +0000 UTC m=+3042.990608001" observedRunningTime="2025-11-22 08:01:21.736943805 +0000 UTC m=+3043.578366821" watchObservedRunningTime="2025-11-22 08:01:21.737688409 +0000 UTC m=+3043.579111425" Nov 22 08:01:28 crc kubenswrapper[4858]: I1122 08:01:28.480398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:28 crc kubenswrapper[4858]: I1122 08:01:28.482110 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:28 crc kubenswrapper[4858]: I1122 08:01:28.523515 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:28 crc kubenswrapper[4858]: I1122 08:01:28.804697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:29 crc kubenswrapper[4858]: I1122 08:01:29.701850 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:30 crc kubenswrapper[4858]: I1122 08:01:30.783817 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r8r6r" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="registry-server" containerID="cri-o://195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4" gracePeriod=2 Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.182538 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.294713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content\") pod \"4339e02b-baeb-46be-a0ec-408982b898d8\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.294815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities\") pod \"4339e02b-baeb-46be-a0ec-408982b898d8\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.294889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf72g\" (UniqueName: \"kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g\") pod \"4339e02b-baeb-46be-a0ec-408982b898d8\" (UID: \"4339e02b-baeb-46be-a0ec-408982b898d8\") " Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.295969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities" (OuterVolumeSpecName: "utilities") pod "4339e02b-baeb-46be-a0ec-408982b898d8" (UID: "4339e02b-baeb-46be-a0ec-408982b898d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.300963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g" (OuterVolumeSpecName: "kube-api-access-pf72g") pod "4339e02b-baeb-46be-a0ec-408982b898d8" (UID: "4339e02b-baeb-46be-a0ec-408982b898d8"). InnerVolumeSpecName "kube-api-access-pf72g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.396444 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf72g\" (UniqueName: \"kubernetes.io/projected/4339e02b-baeb-46be-a0ec-408982b898d8-kube-api-access-pf72g\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.396479 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.630467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4339e02b-baeb-46be-a0ec-408982b898d8" (UID: "4339e02b-baeb-46be-a0ec-408982b898d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.700113 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4339e02b-baeb-46be-a0ec-408982b898d8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.791106 4858 generic.go:334] "Generic (PLEG): container finished" podID="4339e02b-baeb-46be-a0ec-408982b898d8" containerID="195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4" exitCode=0 Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.791159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerDied","Data":"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4"} Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.791193 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8r6r" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.791212 4858 scope.go:117] "RemoveContainer" containerID="195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.791198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8r6r" event={"ID":"4339e02b-baeb-46be-a0ec-408982b898d8","Type":"ContainerDied","Data":"c5a731e01d3800861ef6905cc9a3e8bab9e4da9da3fbd519fe99f4c10cb8a260"} Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.810582 4858 scope.go:117] "RemoveContainer" containerID="892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.822365 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.827724 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8r6r"] Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.828975 4858 scope.go:117] "RemoveContainer" containerID="9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.858747 4858 scope.go:117] "RemoveContainer" containerID="195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4" Nov 22 08:01:31 crc kubenswrapper[4858]: E1122 08:01:31.859530 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4\": container with ID starting with 195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4 not found: ID does not exist" containerID="195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.859560 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4"} err="failed to get container status \"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4\": rpc error: code = NotFound desc = could not find container \"195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4\": container with ID starting with 195ec72a5dbdf251b9128713c3e0f703716dedf480a985d1c38d83cda19514f4 not found: ID does not exist" Nov 22 08:01:31 
crc kubenswrapper[4858]: I1122 08:01:31.859582 4858 scope.go:117] "RemoveContainer" containerID="892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5" Nov 22 08:01:31 crc kubenswrapper[4858]: E1122 08:01:31.859967 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5\": container with ID starting with 892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5 not found: ID does not exist" containerID="892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.859995 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5"} err="failed to get container status \"892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5\": rpc error: code = NotFound desc = could not find container \"892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5\": container with ID starting with 892a3e3e52c280cb12218c135d828d8e6716528fbcfb495dd0400512e7d0e8c5 not found: ID does not exist" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.860009 4858 scope.go:117] "RemoveContainer" containerID="9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8" Nov 22 08:01:31 crc kubenswrapper[4858]: E1122 08:01:31.860284 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8\": container with ID starting with 9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8 not found: ID does not exist" containerID="9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8" Nov 22 08:01:31 crc kubenswrapper[4858]: I1122 08:01:31.860441 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8"} err="failed to get container status \"9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8\": rpc error: code = NotFound desc = could not find container \"9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8\": container with ID starting with 9573e5d7b749b64341406db07ff58843c443d98893c1343d8487bd16c7ef0dc8 not found: ID does not exist" Nov 22 08:01:33 crc kubenswrapper[4858]: I1122 08:01:33.547132 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" path="/var/lib/kubelet/pods/4339e02b-baeb-46be-a0ec-408982b898d8/volumes" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.311811 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.312441 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.312513 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.313304 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.313383 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" gracePeriod=600 Nov 22 08:01:45 crc kubenswrapper[4858]: E1122 08:01:45.459372 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.896044 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" exitCode=0 Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.896129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428"} Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.896480 4858 scope.go:117] "RemoveContainer" containerID="dcbe0d32b87589e2a737ce0d00303efbb3bc376344bcfa93706f1eaa597b064a" Nov 22 08:01:45 crc kubenswrapper[4858]: I1122 08:01:45.898155 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:01:45 crc kubenswrapper[4858]: E1122 08:01:45.898553 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:01:58 crc kubenswrapper[4858]: I1122 08:01:58.535810 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:01:58 crc kubenswrapper[4858]: E1122 08:01:58.536574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 
08:02:13 crc kubenswrapper[4858]: I1122 08:02:13.536591 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:02:13 crc kubenswrapper[4858]: E1122 08:02:13.537474 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:02:26 crc kubenswrapper[4858]: I1122 08:02:26.535675 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:02:26 crc kubenswrapper[4858]: E1122 08:02:26.536495 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:02:40 crc kubenswrapper[4858]: I1122 08:02:40.536038 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:02:40 crc kubenswrapper[4858]: E1122 08:02:40.536817 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:02:53 crc kubenswrapper[4858]: I1122 08:02:53.536373 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:02:53 crc kubenswrapper[4858]: E1122 08:02:53.537344 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:03:05 crc kubenswrapper[4858]: I1122 08:03:05.535998 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:03:05 crc kubenswrapper[4858]: E1122 08:03:05.536805 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:03:20 crc kubenswrapper[4858]: I1122 08:03:20.535792 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:03:20 crc 
kubenswrapper[4858]: E1122 08:03:20.536616 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:03:34 crc kubenswrapper[4858]: I1122 08:03:34.535857 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:03:34 crc kubenswrapper[4858]: E1122 08:03:34.536673 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:03:48 crc kubenswrapper[4858]: I1122 08:03:48.535584 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:03:48 crc kubenswrapper[4858]: E1122 08:03:48.536420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:04:01 crc kubenswrapper[4858]: I1122 08:04:01.535767 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:04:01 crc kubenswrapper[4858]: E1122 08:04:01.536661 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:04:14 crc kubenswrapper[4858]: I1122 08:04:14.536079 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:04:14 crc kubenswrapper[4858]: E1122 08:04:14.536893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:04:29 crc kubenswrapper[4858]: I1122 08:04:29.539418 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:04:29 crc kubenswrapper[4858]: E1122 08:04:29.540202 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:04:41 crc kubenswrapper[4858]: I1122 08:04:41.535805 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:04:41 crc kubenswrapper[4858]: E1122 08:04:41.536619 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:04:52 crc kubenswrapper[4858]: I1122 08:04:52.536492 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:04:52 crc kubenswrapper[4858]: E1122 08:04:52.537261 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:05:04 crc kubenswrapper[4858]: I1122 08:05:04.535833 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:05:04 crc kubenswrapper[4858]: E1122 08:05:04.536573 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:05:16 crc kubenswrapper[4858]: I1122 08:05:16.535723 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:05:16 crc kubenswrapper[4858]: E1122 08:05:16.536470 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:05:29 crc kubenswrapper[4858]: I1122 08:05:29.541122 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:05:29 crc kubenswrapper[4858]: E1122 08:05:29.543477 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:05:44 crc kubenswrapper[4858]: I1122 08:05:44.536367 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:05:44 crc kubenswrapper[4858]: E1122 08:05:44.537109 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:05:55 crc kubenswrapper[4858]: I1122 08:05:55.536249 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:05:55 crc kubenswrapper[4858]: E1122 08:05:55.536908 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:06:09 crc kubenswrapper[4858]: I1122 08:06:09.539060 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:06:09 crc kubenswrapper[4858]: E1122 08:06:09.539733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:06:21 crc kubenswrapper[4858]: I1122 08:06:21.536390 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:06:21 crc kubenswrapper[4858]: E1122 08:06:21.537292 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:06:35 crc kubenswrapper[4858]: I1122 08:06:35.536019 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:06:35 crc kubenswrapper[4858]: E1122 08:06:35.536740 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:06:50 crc kubenswrapper[4858]: I1122 08:06:50.535714 4858 
scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:06:51 crc kubenswrapper[4858]: I1122 08:06:51.268142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae"} Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.531122 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:33 crc kubenswrapper[4858]: E1122 08:08:33.532066 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="registry-server" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.532086 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="registry-server" Nov 22 08:08:33 crc kubenswrapper[4858]: E1122 08:08:33.532106 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="extract-content" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.532113 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="extract-content" Nov 22 08:08:33 crc kubenswrapper[4858]: E1122 08:08:33.532147 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="extract-utilities" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.532158 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="extract-utilities" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.532356 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4339e02b-baeb-46be-a0ec-408982b898d8" containerName="registry-server" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.533732 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.550636 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.683347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrtx\" (UniqueName: \"kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.683854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.684010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.785864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.785992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrtx\" (UniqueName: \"kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.786047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.786657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.786810 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.808376 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gwrtx\" (UniqueName: \"kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx\") pod \"community-operators-ms2d5\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:33 crc kubenswrapper[4858]: I1122 08:08:33.856904 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:34 crc kubenswrapper[4858]: I1122 08:08:34.373382 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:35 crc kubenswrapper[4858]: I1122 08:08:35.079955 4858 generic.go:334] "Generic (PLEG): container finished" podID="724f95c7-e92a-4030-877e-0f4c19475441" containerID="dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12" exitCode=0 Nov 22 08:08:35 crc kubenswrapper[4858]: I1122 08:08:35.080014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerDied","Data":"dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12"} Nov 22 08:08:35 crc kubenswrapper[4858]: I1122 08:08:35.080304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerStarted","Data":"bb406472442bde4acbf9a37b9583119c1836134db81d06d2c8913633ea831dec"} Nov 22 08:08:35 crc kubenswrapper[4858]: I1122 08:08:35.081693 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:08:36 crc kubenswrapper[4858]: I1122 08:08:36.090445 4858 generic.go:334] "Generic (PLEG): container finished" podID="724f95c7-e92a-4030-877e-0f4c19475441" containerID="e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b" exitCode=0 Nov 22 08:08:36 crc kubenswrapper[4858]: I1122 08:08:36.090544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerDied","Data":"e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b"} Nov 22 08:08:37 crc kubenswrapper[4858]: I1122 08:08:37.101982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerStarted","Data":"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd"} Nov 22 08:08:37 crc kubenswrapper[4858]: I1122 08:08:37.122021 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ms2d5" podStartSLOduration=2.703460751 podStartE2EDuration="4.12198944s" podCreationTimestamp="2025-11-22 08:08:33 +0000 UTC" firstStartedPulling="2025-11-22 08:08:35.081473336 +0000 UTC m=+3476.922896342" lastFinishedPulling="2025-11-22 08:08:36.500002025 +0000 UTC m=+3478.341425031" observedRunningTime="2025-11-22 08:08:37.117012181 +0000 UTC m=+3478.958435207" watchObservedRunningTime="2025-11-22 08:08:37.12198944 +0000 UTC m=+3478.963412446" Nov 22 08:08:43 crc kubenswrapper[4858]: I1122 08:08:43.857795 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:43 crc kubenswrapper[4858]: I1122 08:08:43.858435 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:43 crc kubenswrapper[4858]: I1122 08:08:43.910995 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:44 crc kubenswrapper[4858]: I1122 08:08:44.198211 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:44 crc kubenswrapper[4858]: I1122 08:08:44.243601 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.170759 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ms2d5" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="registry-server" containerID="cri-o://3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd" gracePeriod=2 Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.575455 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.599773 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrtx\" (UniqueName: \"kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx\") pod \"724f95c7-e92a-4030-877e-0f4c19475441\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.599903 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities\") pod \"724f95c7-e92a-4030-877e-0f4c19475441\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.599942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content\") pod \"724f95c7-e92a-4030-877e-0f4c19475441\" (UID: \"724f95c7-e92a-4030-877e-0f4c19475441\") " Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.601640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities" (OuterVolumeSpecName: "utilities") pod "724f95c7-e92a-4030-877e-0f4c19475441" (UID: "724f95c7-e92a-4030-877e-0f4c19475441"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.607231 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx" (OuterVolumeSpecName: "kube-api-access-gwrtx") pod "724f95c7-e92a-4030-877e-0f4c19475441" (UID: "724f95c7-e92a-4030-877e-0f4c19475441"). InnerVolumeSpecName "kube-api-access-gwrtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.702222 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrtx\" (UniqueName: \"kubernetes.io/projected/724f95c7-e92a-4030-877e-0f4c19475441-kube-api-access-gwrtx\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.702274 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.851849 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "724f95c7-e92a-4030-877e-0f4c19475441" (UID: "724f95c7-e92a-4030-877e-0f4c19475441"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:08:46 crc kubenswrapper[4858]: I1122 08:08:46.906214 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/724f95c7-e92a-4030-877e-0f4c19475441-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.184979 4858 generic.go:334] "Generic (PLEG): container finished" podID="724f95c7-e92a-4030-877e-0f4c19475441" containerID="3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd" exitCode=0 Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.185041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerDied","Data":"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd"} Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.185089 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ms2d5" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.185125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ms2d5" event={"ID":"724f95c7-e92a-4030-877e-0f4c19475441","Type":"ContainerDied","Data":"bb406472442bde4acbf9a37b9583119c1836134db81d06d2c8913633ea831dec"} Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.185151 4858 scope.go:117] "RemoveContainer" containerID="3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.208834 4858 scope.go:117] "RemoveContainer" containerID="e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.227463 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.234262 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ms2d5"] Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.238630 4858 scope.go:117] "RemoveContainer" containerID="dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.257996 4858 scope.go:117] "RemoveContainer" containerID="3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd" Nov 22 08:08:47 crc kubenswrapper[4858]: E1122 08:08:47.259086 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd\": container with ID starting with 3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd not found: ID does not exist" containerID="3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.259162 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd"} err="failed to get container status \"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd\": rpc error: code = NotFound desc = could not find container \"3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd\": container with ID starting with 3de82728804adf4dda01688de6bd92fedfe86a2213ecdde438303cb4410756bd not found: ID does not exist" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.259220 4858 scope.go:117] "RemoveContainer" containerID="e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b" Nov 22 08:08:47 crc kubenswrapper[4858]: E1122 08:08:47.259855 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b\": container with ID starting with e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b not found: ID does not exist" containerID="e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.259879 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b"} err="failed to get container status \"e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b\": rpc error: code = NotFound desc = could not find 
container \"e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b\": container with ID starting with e07353a7e05113691f5acbc8d67bcc8b0db02aa49530d6c68b1e9c3da1fbb86b not found: ID does not exist" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.259908 4858 scope.go:117] "RemoveContainer" containerID="dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12" Nov 22 08:08:47 crc kubenswrapper[4858]: E1122 08:08:47.260134 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12\": container with ID starting with dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12 not found: ID does not exist" containerID="dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.260177 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12"} err="failed to get container status \"dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12\": rpc error: code = NotFound desc = could not find container \"dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12\": container with ID starting with dcc166bf438ceb4dd890a9344624e9d1f81e15b52a5956a2cbaf5f296cdd0e12 not found: ID does not exist" Nov 22 08:08:47 crc kubenswrapper[4858]: I1122 08:08:47.545997 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724f95c7-e92a-4030-877e-0f4c19475441" path="/var/lib/kubelet/pods/724f95c7-e92a-4030-877e-0f4c19475441/volumes" Nov 22 08:09:15 crc kubenswrapper[4858]: I1122 08:09:15.312088 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:09:15 crc kubenswrapper[4858]: I1122 08:09:15.312696 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.276853 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:44 crc kubenswrapper[4858]: E1122 08:09:44.278009 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="extract-utilities" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.278027 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="extract-utilities" Nov 22 08:09:44 crc kubenswrapper[4858]: E1122 08:09:44.278044 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="extract-content" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.278050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="extract-content" Nov 22 08:09:44 crc kubenswrapper[4858]: E1122 08:09:44.278067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="registry-server" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.278076 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="registry-server" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.278325 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="724f95c7-e92a-4030-877e-0f4c19475441" containerName="registry-server" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.279592 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.293866 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.356730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.356799 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zxn\" (UniqueName: \"kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.356840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.458416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.458517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zxn\" (UniqueName: \"kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.458558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.459027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities\") pod \"certified-operators-5jtcc\" 
(UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.459089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.484456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zxn\" (UniqueName: \"kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn\") pod \"certified-operators-5jtcc\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:44 crc kubenswrapper[4858]: I1122 08:09:44.609783 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:45 crc kubenswrapper[4858]: I1122 08:09:45.255907 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:45 crc kubenswrapper[4858]: I1122 08:09:45.312918 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:09:45 crc kubenswrapper[4858]: I1122 08:09:45.313011 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:09:45 crc kubenswrapper[4858]: I1122 08:09:45.659256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerStarted","Data":"d38ba321eacad88732781302313f5f791a91dd28362312601c2195c668122aa1"} Nov 22 08:09:46 crc kubenswrapper[4858]: I1122 08:09:46.678861 4858 generic.go:334] "Generic (PLEG): container finished" podID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerID="eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc" exitCode=0 Nov 22 08:09:46 crc kubenswrapper[4858]: I1122 08:09:46.678948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerDied","Data":"eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc"} Nov 22 08:09:47 crc kubenswrapper[4858]: I1122 08:09:47.690596 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerStarted","Data":"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784"} Nov 22 08:09:48 crc kubenswrapper[4858]: I1122 08:09:48.707179 4858 generic.go:334] "Generic (PLEG): container finished" podID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerID="91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784" exitCode=0 Nov 22 08:09:48 crc kubenswrapper[4858]: I1122 08:09:48.707229 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerDied","Data":"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784"} Nov 22 08:09:49 crc kubenswrapper[4858]: I1122 08:09:49.722620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerStarted","Data":"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c"} Nov 22 08:09:49 crc kubenswrapper[4858]: I1122 08:09:49.749016 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jtcc" podStartSLOduration=3.268605688 podStartE2EDuration="5.748975798s" podCreationTimestamp="2025-11-22 08:09:44 +0000 UTC" firstStartedPulling="2025-11-22 08:09:46.681023594 +0000 UTC m=+3548.522446600" lastFinishedPulling="2025-11-22 08:09:49.161393704 +0000 UTC m=+3551.002816710" observedRunningTime="2025-11-22 08:09:49.743157832 +0000 UTC m=+3551.584580848" watchObservedRunningTime="2025-11-22 08:09:49.748975798 +0000 UTC m=+3551.590398804" Nov 22 08:09:54 crc kubenswrapper[4858]: I1122 08:09:54.610129 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:54 crc kubenswrapper[4858]: I1122 08:09:54.610859 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:54 crc kubenswrapper[4858]: I1122 08:09:54.657314 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:54 crc kubenswrapper[4858]: I1122 08:09:54.797183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:54 crc kubenswrapper[4858]: I1122 08:09:54.896062 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:56 crc kubenswrapper[4858]: I1122 08:09:56.769946 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jtcc" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="registry-server" containerID="cri-o://ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c" gracePeriod=2 Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.160599 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.256988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content\") pod \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.257074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8zxn\" (UniqueName: \"kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn\") pod \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.257199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities\") pod \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\" (UID: \"de8aab87-cb1a-45a7-ab55-d7abb4d324f9\") " Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.258279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities" (OuterVolumeSpecName: "utilities") pod "de8aab87-cb1a-45a7-ab55-d7abb4d324f9" (UID: "de8aab87-cb1a-45a7-ab55-d7abb4d324f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.263575 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn" (OuterVolumeSpecName: "kube-api-access-t8zxn") pod "de8aab87-cb1a-45a7-ab55-d7abb4d324f9" (UID: "de8aab87-cb1a-45a7-ab55-d7abb4d324f9"). InnerVolumeSpecName "kube-api-access-t8zxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.310109 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de8aab87-cb1a-45a7-ab55-d7abb4d324f9" (UID: "de8aab87-cb1a-45a7-ab55-d7abb4d324f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.358798 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.358845 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8zxn\" (UniqueName: \"kubernetes.io/projected/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-kube-api-access-t8zxn\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.358858 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8aab87-cb1a-45a7-ab55-d7abb4d324f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.781587 4858 generic.go:334] "Generic (PLEG): container finished" podID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerID="ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c" exitCode=0 Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.781655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerDied","Data":"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c"} Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.781690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jtcc" event={"ID":"de8aab87-cb1a-45a7-ab55-d7abb4d324f9","Type":"ContainerDied","Data":"d38ba321eacad88732781302313f5f791a91dd28362312601c2195c668122aa1"} Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.781710 4858 scope.go:117] "RemoveContainer" containerID="ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.781861 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jtcc" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.810029 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.818671 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jtcc"] Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.820849 4858 scope.go:117] "RemoveContainer" containerID="91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.847195 4858 scope.go:117] "RemoveContainer" containerID="eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.873385 4858 scope.go:117] "RemoveContainer" containerID="ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c" Nov 22 08:09:57 crc kubenswrapper[4858]: E1122 08:09:57.874060 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c\": container with ID starting with ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c not found: ID does not exist" containerID="ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.874104 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c"} err="failed to get container status \"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c\": rpc error: code = NotFound desc = could not find container \"ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c\": container with ID starting with ada68ed45c189b080226770e9611bcfd84cc9d92336f725fc23f855370a0393c not found: ID does not exist" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.874135 4858 scope.go:117] "RemoveContainer" containerID="91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784" Nov 22 08:09:57 crc kubenswrapper[4858]: E1122 08:09:57.875376 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784\": container with ID starting with 91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784 not found: ID does not exist" containerID="91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.875412 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784"} err="failed to get container status \"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784\": rpc error: code = NotFound desc = could not find container \"91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784\": container with ID starting with 91cc68821751b46e811955a0eb49b135c45bcc78a2f4bb7c19be1bcfa5439784 not found: ID does not exist" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.875431 4858 scope.go:117] "RemoveContainer" containerID="eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc" Nov 22 08:09:57 crc kubenswrapper[4858]: E1122 08:09:57.875834 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc\": container with ID starting with eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc not found: ID does not exist" containerID="eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc" Nov 22 08:09:57 crc kubenswrapper[4858]: I1122 08:09:57.875862 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc"} err="failed to get container status \"eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc\": rpc error: code = NotFound desc = could not find container \"eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc\": container with ID starting with eba398a0dee95d3fd5a1dc91347a8d5e5aca21474bf4dec48166285ff0922ecc not found: ID does not exist" Nov 22 08:09:59 crc kubenswrapper[4858]: I1122 08:09:59.545090 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" path="/var/lib/kubelet/pods/de8aab87-cb1a-45a7-ab55-d7abb4d324f9/volumes" Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.312165 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.313489 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.313558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.314142 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.314216 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae" gracePeriod=600 Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.928377 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae" exitCode=0 Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.928432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae"} Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.928775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894"} Nov 22 08:10:15 crc kubenswrapper[4858]: I1122 08:10:15.928798 4858 scope.go:117] "RemoveContainer" containerID="0c51e807ead3828025b0fb5fde3f58a0fdea7bb34363b3101ca542793b3c4428" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.619531 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:11:47 crc kubenswrapper[4858]: E1122 08:11:47.620556 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="extract-utilities" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.620575 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="extract-utilities" Nov 22 08:11:47 crc kubenswrapper[4858]: E1122 08:11:47.620601 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="extract-content" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.620610 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="extract-content" Nov 22 08:11:47 crc kubenswrapper[4858]: E1122 08:11:47.620625 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="registry-server" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.620631 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="registry-server" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.620770 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8aab87-cb1a-45a7-ab55-d7abb4d324f9" containerName="registry-server" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.622111 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.632943 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.805986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.806067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbxs8\" (UniqueName: \"kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.806909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.908424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.908501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbxs8\" (UniqueName: \"kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.908572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.909216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.909233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.940798 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lbxs8\" (UniqueName: \"kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8\") pod \"redhat-marketplace-mf9ws\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:47 crc kubenswrapper[4858]: I1122 08:11:47.946471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:48 crc kubenswrapper[4858]: I1122 08:11:48.166392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:11:48 crc kubenswrapper[4858]: I1122 08:11:48.688478 4858 generic.go:334] "Generic (PLEG): container finished" podID="c136df08-15d1-4209-b9b3-59baf47304c5" containerID="b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82" exitCode=0 Nov 22 08:11:48 crc kubenswrapper[4858]: I1122 08:11:48.688892 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerDied","Data":"b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82"} Nov 22 08:11:48 crc kubenswrapper[4858]: I1122 08:11:48.688928 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerStarted","Data":"abf0aaba02ed59e9ef22d59e83db2270383d28a882843683a22983057dd5b976"} Nov 22 08:11:49 crc kubenswrapper[4858]: I1122 08:11:49.698351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerStarted","Data":"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf"} Nov 22 08:11:50 crc kubenswrapper[4858]: I1122 08:11:50.714952 4858 generic.go:334] "Generic (PLEG): container finished" podID="c136df08-15d1-4209-b9b3-59baf47304c5" containerID="ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf" exitCode=0 Nov 22 08:11:50 crc kubenswrapper[4858]: I1122 08:11:50.715152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerDied","Data":"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf"} Nov 22 08:11:51 crc kubenswrapper[4858]: I1122 08:11:51.725712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerStarted","Data":"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756"} Nov 22 08:11:51 crc kubenswrapper[4858]: I1122 08:11:51.749964 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mf9ws" podStartSLOduration=2.330998223 podStartE2EDuration="4.749942706s" podCreationTimestamp="2025-11-22 08:11:47 +0000 UTC" firstStartedPulling="2025-11-22 08:11:48.690975739 +0000 UTC m=+3670.532398745" lastFinishedPulling="2025-11-22 08:11:51.109920212 +0000 UTC m=+3672.951343228" observedRunningTime="2025-11-22 08:11:51.748135977 +0000 UTC m=+3673.589558983" watchObservedRunningTime="2025-11-22 08:11:51.749942706 +0000 UTC m=+3673.591365712" Nov 22 08:11:57 crc kubenswrapper[4858]: I1122 08:11:57.947518 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:57 crc kubenswrapper[4858]: I1122 08:11:57.948555 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:58 crc kubenswrapper[4858]: I1122 08:11:58.001998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:58 crc kubenswrapper[4858]: I1122 08:11:58.822961 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:11:58 crc kubenswrapper[4858]: I1122 08:11:58.874960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:12:00 crc kubenswrapper[4858]: I1122 08:12:00.790150 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mf9ws" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="registry-server" containerID="cri-o://743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756" gracePeriod=2 Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.248249 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.421755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbxs8\" (UniqueName: \"kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8\") pod \"c136df08-15d1-4209-b9b3-59baf47304c5\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.421853 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities\") pod \"c136df08-15d1-4209-b9b3-59baf47304c5\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.421885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content\") pod \"c136df08-15d1-4209-b9b3-59baf47304c5\" (UID: \"c136df08-15d1-4209-b9b3-59baf47304c5\") " Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.423417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities" (OuterVolumeSpecName: "utilities") pod "c136df08-15d1-4209-b9b3-59baf47304c5" (UID: "c136df08-15d1-4209-b9b3-59baf47304c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.431661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8" (OuterVolumeSpecName: "kube-api-access-lbxs8") pod "c136df08-15d1-4209-b9b3-59baf47304c5" (UID: "c136df08-15d1-4209-b9b3-59baf47304c5"). InnerVolumeSpecName "kube-api-access-lbxs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.442878 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c136df08-15d1-4209-b9b3-59baf47304c5" (UID: "c136df08-15d1-4209-b9b3-59baf47304c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.524907 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbxs8\" (UniqueName: \"kubernetes.io/projected/c136df08-15d1-4209-b9b3-59baf47304c5-kube-api-access-lbxs8\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.524962 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.524977 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136df08-15d1-4209-b9b3-59baf47304c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.803019 4858 generic.go:334] "Generic (PLEG): container finished" podID="c136df08-15d1-4209-b9b3-59baf47304c5" containerID="743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756" exitCode=0 Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.803117 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9ws" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.803109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerDied","Data":"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756"} Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.803280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9ws" event={"ID":"c136df08-15d1-4209-b9b3-59baf47304c5","Type":"ContainerDied","Data":"abf0aaba02ed59e9ef22d59e83db2270383d28a882843683a22983057dd5b976"} Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.803332 4858 scope.go:117] "RemoveContainer" containerID="743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.829057 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.837596 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9ws"] Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.838916 4858 scope.go:117] "RemoveContainer" containerID="ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.863075 4858 scope.go:117] "RemoveContainer" containerID="b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.896898 4858 scope.go:117] "RemoveContainer" containerID="743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756" Nov 22 08:12:01 crc kubenswrapper[4858]: E1122 08:12:01.897740 4858 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756\": container with ID starting with 743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756 not found: ID does not exist" containerID="743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.897811 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756"} err="failed to get container status \"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756\": rpc error: code = NotFound desc = could not find container \"743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756\": container with ID starting with 743a7c62007e0b4c187a3e2bea09b1dc6170787b9889864ff706b4033018c756 not found: ID does not exist" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.897840 4858 scope.go:117] "RemoveContainer" containerID="ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf" Nov 22 08:12:01 crc kubenswrapper[4858]: E1122 08:12:01.898194 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf\": container with ID starting with ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf not found: ID does not exist" containerID="ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.898226 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf"} err="failed to get container status \"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf\": rpc error: code = NotFound desc = could not find container \"ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf\": container with ID starting with ab801aa75cbb77f0aa7ee23f6d8e170c19d98be88cb32afcee93a6ae062c1cbf not found: ID does not exist" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.898252 4858 scope.go:117] "RemoveContainer" containerID="b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82" Nov 22 08:12:01 crc kubenswrapper[4858]: E1122 08:12:01.898806 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82\": container with ID starting with b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82 not found: ID does not exist" containerID="b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82" Nov 22 08:12:01 crc kubenswrapper[4858]: I1122 08:12:01.898868 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82"} err="failed to get container status \"b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82\": rpc error: code = NotFound desc = could not find container \"b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82\": container with ID starting with b50733acb601eec89d8e7ca1a28c40179a795d23e6841f30c16be46174ef3f82 not found: ID does not exist" Nov 22 08:12:03 crc kubenswrapper[4858]: I1122 08:12:03.545409 4858 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" path="/var/lib/kubelet/pods/c136df08-15d1-4209-b9b3-59baf47304c5/volumes" Nov 22 08:12:15 crc kubenswrapper[4858]: I1122 08:12:15.311693 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:12:15 crc kubenswrapper[4858]: I1122 08:12:15.312344 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:12:45 crc kubenswrapper[4858]: I1122 08:12:45.312592 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:12:45 crc kubenswrapper[4858]: I1122 08:12:45.313685 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:13:15 crc kubenswrapper[4858]: I1122 08:13:15.312023 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:13:15 crc kubenswrapper[4858]: I1122 08:13:15.312729 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:13:15 crc kubenswrapper[4858]: I1122 08:13:15.312793 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:13:15 crc kubenswrapper[4858]: I1122 08:13:15.313578 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:13:15 crc kubenswrapper[4858]: I1122 08:13:15.313646 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" gracePeriod=600 Nov 22 08:13:15 crc kubenswrapper[4858]: E1122 08:13:15.437076 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:13:16 crc kubenswrapper[4858]: I1122 08:13:16.392870 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" exitCode=0 Nov 22 08:13:16 crc kubenswrapper[4858]: I1122 08:13:16.392934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894"} Nov 22 08:13:16 crc kubenswrapper[4858]: I1122 08:13:16.392980 4858 scope.go:117] "RemoveContainer" containerID="f2a6ff3ce11e69fde774af3ed5d0895172fc718c722ae0b17ab73258fcb77eae" Nov 22 08:13:16 crc kubenswrapper[4858]: I1122 08:13:16.393601 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:13:16 crc kubenswrapper[4858]: E1122 08:13:16.393892 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:13:30 crc kubenswrapper[4858]: I1122 08:13:30.536673 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:13:30 crc kubenswrapper[4858]: E1122 08:13:30.537704 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:13:43 crc kubenswrapper[4858]: I1122 08:13:43.536463 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:13:43 crc kubenswrapper[4858]: E1122 08:13:43.539055 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:13:54 crc kubenswrapper[4858]: I1122 08:13:54.535707 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:13:54 crc kubenswrapper[4858]: E1122 08:13:54.536492 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:14:08 crc kubenswrapper[4858]: I1122 08:14:08.536242 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:14:08 crc kubenswrapper[4858]: E1122 08:14:08.537064 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:14:23 crc kubenswrapper[4858]: I1122 08:14:23.535773 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:14:23 crc kubenswrapper[4858]: E1122 08:14:23.536724 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:14:34 crc kubenswrapper[4858]: I1122 08:14:34.536228 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:14:34 crc kubenswrapper[4858]: E1122 08:14:34.536925 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:14:47 crc kubenswrapper[4858]: I1122 08:14:47.535885 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:14:47 crc kubenswrapper[4858]: E1122 08:14:47.536702 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.147553 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx"] Nov 22 08:15:00 crc kubenswrapper[4858]: E1122 08:15:00.148733 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="registry-server" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.148755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="registry-server" Nov 22 
08:15:00 crc kubenswrapper[4858]: E1122 08:15:00.148770 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="extract-content" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.148778 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="extract-content" Nov 22 08:15:00 crc kubenswrapper[4858]: E1122 08:15:00.148800 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="extract-utilities" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.148809 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="extract-utilities" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.148992 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c136df08-15d1-4209-b9b3-59baf47304c5" containerName="registry-server" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.149806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.152445 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.152546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.158123 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx"] Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.262509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5pr7\" (UniqueName: \"kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.262654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.262687 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.363949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 
22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.364058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5pr7\" (UniqueName: \"kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.364150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.365912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.374541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.383723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5pr7\" (UniqueName: \"kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7\") pod \"collect-profiles-29396655-7chsx\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.502512 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:00 crc kubenswrapper[4858]: I1122 08:15:00.957855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx"] Nov 22 08:15:01 crc kubenswrapper[4858]: I1122 08:15:01.209273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" event={"ID":"3ebfdfb9-2131-4121-9dae-064a4b885a05","Type":"ContainerStarted","Data":"b044029ce3e54adb5612d812fb3916758e78c4418a56af329f21156022826308"} Nov 22 08:15:02 crc kubenswrapper[4858]: I1122 08:15:02.219570 4858 generic.go:334] "Generic (PLEG): container finished" podID="3ebfdfb9-2131-4121-9dae-064a4b885a05" containerID="496ed9a2a1df605e0e7725217c93c999e6e2f725fa8f67414f1fbf259bf00721" exitCode=0 Nov 22 08:15:02 crc kubenswrapper[4858]: I1122 08:15:02.219628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" event={"ID":"3ebfdfb9-2131-4121-9dae-064a4b885a05","Type":"ContainerDied","Data":"496ed9a2a1df605e0e7725217c93c999e6e2f725fa8f67414f1fbf259bf00721"} Nov 22 08:15:02 crc kubenswrapper[4858]: I1122 08:15:02.535370 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:15:02 crc kubenswrapper[4858]: E1122 08:15:02.535651 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.521861 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.625722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume\") pod \"3ebfdfb9-2131-4121-9dae-064a4b885a05\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.625856 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5pr7\" (UniqueName: \"kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7\") pod \"3ebfdfb9-2131-4121-9dae-064a4b885a05\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.625975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume\") pod \"3ebfdfb9-2131-4121-9dae-064a4b885a05\" (UID: \"3ebfdfb9-2131-4121-9dae-064a4b885a05\") " Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.626682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ebfdfb9-2131-4121-9dae-064a4b885a05" (UID: "3ebfdfb9-2131-4121-9dae-064a4b885a05"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.631798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ebfdfb9-2131-4121-9dae-064a4b885a05" (UID: "3ebfdfb9-2131-4121-9dae-064a4b885a05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.632794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7" (OuterVolumeSpecName: "kube-api-access-n5pr7") pod "3ebfdfb9-2131-4121-9dae-064a4b885a05" (UID: "3ebfdfb9-2131-4121-9dae-064a4b885a05"). InnerVolumeSpecName "kube-api-access-n5pr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.728057 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ebfdfb9-2131-4121-9dae-064a4b885a05-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.728103 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebfdfb9-2131-4121-9dae-064a4b885a05-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4858]: I1122 08:15:03.728115 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5pr7\" (UniqueName: \"kubernetes.io/projected/3ebfdfb9-2131-4121-9dae-064a4b885a05-kube-api-access-n5pr7\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:04 crc kubenswrapper[4858]: I1122 08:15:04.247895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" event={"ID":"3ebfdfb9-2131-4121-9dae-064a4b885a05","Type":"ContainerDied","Data":"b044029ce3e54adb5612d812fb3916758e78c4418a56af329f21156022826308"} Nov 22 08:15:04 crc kubenswrapper[4858]: I1122 08:15:04.248227 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b044029ce3e54adb5612d812fb3916758e78c4418a56af329f21156022826308" Nov 22 08:15:04 crc kubenswrapper[4858]: I1122 08:15:04.248321 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx" Nov 22 08:15:04 crc kubenswrapper[4858]: I1122 08:15:04.605301 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68"] Nov 22 08:15:04 crc kubenswrapper[4858]: I1122 08:15:04.613174 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-xdf68"] Nov 22 08:15:05 crc kubenswrapper[4858]: I1122 08:15:05.550594 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec7658a-c01c-4e79-9c95-c591bc5af55d" path="/var/lib/kubelet/pods/bec7658a-c01c-4e79-9c95-c591bc5af55d/volumes" Nov 22 08:15:17 crc kubenswrapper[4858]: I1122 08:15:17.536345 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:15:17 crc kubenswrapper[4858]: E1122 08:15:17.537209 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.030131 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:25 crc kubenswrapper[4858]: E1122 08:15:25.031026 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ebfdfb9-2131-4121-9dae-064a4b885a05" containerName="collect-profiles" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.031047 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ebfdfb9-2131-4121-9dae-064a4b885a05" containerName="collect-profiles" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.031206 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ebfdfb9-2131-4121-9dae-064a4b885a05" containerName="collect-profiles" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.032400 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.038489 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.141836 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkzd\" (UniqueName: \"kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.141904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.142045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.243252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkzd\" (UniqueName: \"kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.243313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.243361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.243832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.243983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.572129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7nkzd\" (UniqueName: \"kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd\") pod \"redhat-operators-tvjcm\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:25 crc kubenswrapper[4858]: I1122 08:15:25.660104 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:26 crc kubenswrapper[4858]: I1122 08:15:26.143961 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:26 crc kubenswrapper[4858]: I1122 08:15:26.417094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerStarted","Data":"bf7562033667b40872997b47af92a36e902c9ab4886642a01f3a20d290c2eb63"} Nov 22 08:15:27 crc kubenswrapper[4858]: I1122 08:15:27.428954 4858 generic.go:334] "Generic (PLEG): container finished" podID="152de895-6428-4e17-b648-28fecc09c1ce" containerID="fca47cf7b08623e29250a91a49628a858600773f409d61940693e3bae0eb5d4b" exitCode=0 Nov 22 08:15:27 crc kubenswrapper[4858]: I1122 08:15:27.429091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerDied","Data":"fca47cf7b08623e29250a91a49628a858600773f409d61940693e3bae0eb5d4b"} Nov 22 08:15:27 crc kubenswrapper[4858]: I1122 08:15:27.431560 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:15:28 crc kubenswrapper[4858]: I1122 08:15:28.439724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerStarted","Data":"cd16647b72169224d0531b70f4b2fe8fbc1c158cb2e755eabdd330c0cce9feb2"} Nov 22 08:15:29 crc kubenswrapper[4858]: I1122 08:15:29.449391 4858 generic.go:334] "Generic (PLEG): container finished" podID="152de895-6428-4e17-b648-28fecc09c1ce" containerID="cd16647b72169224d0531b70f4b2fe8fbc1c158cb2e755eabdd330c0cce9feb2" exitCode=0 Nov 22 08:15:29 crc kubenswrapper[4858]: I1122 08:15:29.449726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerDied","Data":"cd16647b72169224d0531b70f4b2fe8fbc1c158cb2e755eabdd330c0cce9feb2"} Nov 22 08:15:30 crc kubenswrapper[4858]: I1122 08:15:30.460921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerStarted","Data":"e3aa7ac2f8b78a96455635339fb2e8bc4286b08957ef894d916e37896951727d"} Nov 22 08:15:30 crc kubenswrapper[4858]: I1122 08:15:30.485790 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tvjcm" podStartSLOduration=3.026027697 podStartE2EDuration="5.485761726s" podCreationTimestamp="2025-11-22 08:15:25 +0000 UTC" firstStartedPulling="2025-11-22 08:15:27.431257312 +0000 UTC m=+3889.272680318" lastFinishedPulling="2025-11-22 08:15:29.890991341 +0000 UTC m=+3891.732414347" observedRunningTime="2025-11-22 08:15:30.479777704 +0000 UTC m=+3892.321200710" watchObservedRunningTime="2025-11-22 08:15:30.485761726 +0000 UTC m=+3892.327184732" Nov 22 08:15:32 crc 
kubenswrapper[4858]: I1122 08:15:32.039953 4858 scope.go:117] "RemoveContainer" containerID="42e9dbf0d5ef9081d0abc2ecbd765b454768fad3024d85081fdcfabbc9bec948" Nov 22 08:15:32 crc kubenswrapper[4858]: I1122 08:15:32.535515 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:15:32 crc kubenswrapper[4858]: E1122 08:15:32.536190 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:15:35 crc kubenswrapper[4858]: I1122 08:15:35.661237 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:35 crc kubenswrapper[4858]: I1122 08:15:35.661573 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:35 crc kubenswrapper[4858]: I1122 08:15:35.708260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:36 crc kubenswrapper[4858]: I1122 08:15:36.545809 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:37 crc kubenswrapper[4858]: I1122 08:15:37.621369 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:38 crc kubenswrapper[4858]: I1122 08:15:38.520243 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tvjcm" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="registry-server" containerID="cri-o://e3aa7ac2f8b78a96455635339fb2e8bc4286b08957ef894d916e37896951727d" gracePeriod=2 Nov 22 08:15:39 crc kubenswrapper[4858]: I1122 08:15:39.529766 4858 generic.go:334] "Generic (PLEG): container finished" podID="152de895-6428-4e17-b648-28fecc09c1ce" containerID="e3aa7ac2f8b78a96455635339fb2e8bc4286b08957ef894d916e37896951727d" exitCode=0 Nov 22 08:15:39 crc kubenswrapper[4858]: I1122 08:15:39.529845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerDied","Data":"e3aa7ac2f8b78a96455635339fb2e8bc4286b08957ef894d916e37896951727d"} Nov 22 08:15:39 crc kubenswrapper[4858]: I1122 08:15:39.920092 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.093883 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content\") pod \"152de895-6428-4e17-b648-28fecc09c1ce\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.093984 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities\") pod \"152de895-6428-4e17-b648-28fecc09c1ce\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.094108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nkzd\" (UniqueName: \"kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd\") pod \"152de895-6428-4e17-b648-28fecc09c1ce\" (UID: \"152de895-6428-4e17-b648-28fecc09c1ce\") " Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.095370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities" (OuterVolumeSpecName: "utilities") pod "152de895-6428-4e17-b648-28fecc09c1ce" (UID: "152de895-6428-4e17-b648-28fecc09c1ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.100160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd" (OuterVolumeSpecName: "kube-api-access-7nkzd") pod "152de895-6428-4e17-b648-28fecc09c1ce" (UID: "152de895-6428-4e17-b648-28fecc09c1ce"). InnerVolumeSpecName "kube-api-access-7nkzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.188782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "152de895-6428-4e17-b648-28fecc09c1ce" (UID: "152de895-6428-4e17-b648-28fecc09c1ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.195510 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.195562 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152de895-6428-4e17-b648-28fecc09c1ce-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.195575 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nkzd\" (UniqueName: \"kubernetes.io/projected/152de895-6428-4e17-b648-28fecc09c1ce-kube-api-access-7nkzd\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.542068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tvjcm" event={"ID":"152de895-6428-4e17-b648-28fecc09c1ce","Type":"ContainerDied","Data":"bf7562033667b40872997b47af92a36e902c9ab4886642a01f3a20d290c2eb63"} Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.542170 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tvjcm" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.542423 4858 scope.go:117] "RemoveContainer" containerID="e3aa7ac2f8b78a96455635339fb2e8bc4286b08957ef894d916e37896951727d" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.568910 4858 scope.go:117] "RemoveContainer" containerID="cd16647b72169224d0531b70f4b2fe8fbc1c158cb2e755eabdd330c0cce9feb2" Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.580499 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.585787 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tvjcm"] Nov 22 08:15:40 crc kubenswrapper[4858]: I1122 08:15:40.603628 4858 scope.go:117] "RemoveContainer" containerID="fca47cf7b08623e29250a91a49628a858600773f409d61940693e3bae0eb5d4b" Nov 22 08:15:41 crc kubenswrapper[4858]: I1122 08:15:41.548358 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152de895-6428-4e17-b648-28fecc09c1ce" path="/var/lib/kubelet/pods/152de895-6428-4e17-b648-28fecc09c1ce/volumes" Nov 22 08:15:45 crc kubenswrapper[4858]: I1122 08:15:45.536815 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:15:45 crc kubenswrapper[4858]: E1122 08:15:45.537954 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:15:58 crc kubenswrapper[4858]: I1122 08:15:58.536298 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:15:58 crc kubenswrapper[4858]: E1122 08:15:58.537705 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:16:10 crc kubenswrapper[4858]: I1122 08:16:10.535185 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:16:10 crc kubenswrapper[4858]: E1122 08:16:10.535945 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:16:24 crc kubenswrapper[4858]: I1122 08:16:24.536036 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:16:24 crc kubenswrapper[4858]: E1122 08:16:24.536873 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:16:38 crc kubenswrapper[4858]: I1122 08:16:38.536115 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:16:38 crc kubenswrapper[4858]: E1122 08:16:38.537483 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:16:53 crc kubenswrapper[4858]: I1122 08:16:53.536451 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:16:53 crc kubenswrapper[4858]: E1122 08:16:53.537308 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:17:05 crc kubenswrapper[4858]: I1122 08:17:05.536110 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:17:05 crc kubenswrapper[4858]: E1122 08:17:05.537485 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:17:17 crc kubenswrapper[4858]: I1122 08:17:17.538409 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:17:17 crc kubenswrapper[4858]: E1122 08:17:17.539441 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:17:30 crc kubenswrapper[4858]: I1122 08:17:30.537490 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:17:30 crc kubenswrapper[4858]: E1122 08:17:30.538295 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:17:41 crc kubenswrapper[4858]: I1122 08:17:41.536798 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:17:41 crc kubenswrapper[4858]: E1122 08:17:41.537669 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:17:52 crc kubenswrapper[4858]: I1122 08:17:52.535544 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:17:52 crc kubenswrapper[4858]: E1122 08:17:52.536275 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:18:03 crc kubenswrapper[4858]: I1122 08:18:03.536163 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:18:03 crc kubenswrapper[4858]: E1122 08:18:03.537266 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:18:16 crc kubenswrapper[4858]: I1122 08:18:16.535736 4858 
scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:18:16 crc kubenswrapper[4858]: I1122 08:18:16.753718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9"} Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.847798 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:02 crc kubenswrapper[4858]: E1122 08:19:02.848727 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="extract-content" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.848778 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="extract-content" Nov 22 08:19:02 crc kubenswrapper[4858]: E1122 08:19:02.848803 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="registry-server" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.848813 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="registry-server" Nov 22 08:19:02 crc kubenswrapper[4858]: E1122 08:19:02.848827 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="extract-utilities" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.848835 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="extract-utilities" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.849005 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="152de895-6428-4e17-b648-28fecc09c1ce" containerName="registry-server" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.850521 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.862853 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.958650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.958737 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:02 crc kubenswrapper[4858]: I1122 08:19:02.958882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntld\" (UniqueName: \"kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.060940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.060976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.061115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.061295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntld\" (UniqueName: \"kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.061537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.086233 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vntld\" (UniqueName: \"kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld\") pod \"community-operators-5x69k\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.171105 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:03 crc kubenswrapper[4858]: I1122 08:19:03.718436 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:04 crc kubenswrapper[4858]: I1122 08:19:04.165560 4858 generic.go:334] "Generic (PLEG): container finished" podID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerID="aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751" exitCode=0 Nov 22 08:19:04 crc kubenswrapper[4858]: I1122 08:19:04.165666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerDied","Data":"aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751"} Nov 22 08:19:04 crc kubenswrapper[4858]: I1122 08:19:04.166029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerStarted","Data":"2ddb6c78c175fc631344a1355506ce9d132da0a8b5a3df49da122d765f3de754"} Nov 22 08:19:05 crc kubenswrapper[4858]: I1122 08:19:05.178534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerStarted","Data":"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17"} Nov 22 08:19:06 crc kubenswrapper[4858]: I1122 08:19:06.188820 4858 generic.go:334] "Generic (PLEG): container finished" podID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerID="9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17" exitCode=0 Nov 22 08:19:06 crc kubenswrapper[4858]: I1122 08:19:06.188896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerDied","Data":"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17"} Nov 22 08:19:07 crc kubenswrapper[4858]: I1122 08:19:07.201681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerStarted","Data":"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2"} Nov 22 08:19:07 crc kubenswrapper[4858]: I1122 08:19:07.228813 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5x69k" podStartSLOduration=2.81455299 podStartE2EDuration="5.228787935s" podCreationTimestamp="2025-11-22 08:19:02 +0000 UTC" firstStartedPulling="2025-11-22 08:19:04.168775787 +0000 UTC m=+4106.010198793" lastFinishedPulling="2025-11-22 08:19:06.583010732 +0000 UTC m=+4108.424433738" observedRunningTime="2025-11-22 08:19:07.22177284 +0000 UTC m=+4109.063195856" watchObservedRunningTime="2025-11-22 08:19:07.228787935 +0000 UTC m=+4109.070210941" Nov 22 08:19:13 crc kubenswrapper[4858]: I1122 08:19:13.171802 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:13 crc kubenswrapper[4858]: I1122 08:19:13.172441 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:13 crc kubenswrapper[4858]: I1122 08:19:13.217956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:13 crc kubenswrapper[4858]: I1122 08:19:13.286577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:13 crc kubenswrapper[4858]: I1122 08:19:13.452365 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.267481 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5x69k" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="registry-server" containerID="cri-o://7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2" gracePeriod=2 Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.650752 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.754781 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntld\" (UniqueName: \"kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld\") pod \"19a25a9f-6f87-47b1-bbb0-b85211b61266\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.754911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content\") pod \"19a25a9f-6f87-47b1-bbb0-b85211b61266\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.754961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities\") pod \"19a25a9f-6f87-47b1-bbb0-b85211b61266\" (UID: \"19a25a9f-6f87-47b1-bbb0-b85211b61266\") " Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.756864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities" (OuterVolumeSpecName: "utilities") pod "19a25a9f-6f87-47b1-bbb0-b85211b61266" (UID: "19a25a9f-6f87-47b1-bbb0-b85211b61266"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.762543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld" (OuterVolumeSpecName: "kube-api-access-vntld") pod "19a25a9f-6f87-47b1-bbb0-b85211b61266" (UID: "19a25a9f-6f87-47b1-bbb0-b85211b61266"). InnerVolumeSpecName "kube-api-access-vntld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.856997 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntld\" (UniqueName: \"kubernetes.io/projected/19a25a9f-6f87-47b1-bbb0-b85211b61266-kube-api-access-vntld\") on node \"crc\" DevicePath \"\"" Nov 22 08:19:15 crc kubenswrapper[4858]: I1122 08:19:15.857043 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.278840 4858 generic.go:334] "Generic (PLEG): container finished" podID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerID="7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2" exitCode=0 Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.278908 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5x69k" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.278930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerDied","Data":"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2"} Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.279375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5x69k" event={"ID":"19a25a9f-6f87-47b1-bbb0-b85211b61266","Type":"ContainerDied","Data":"2ddb6c78c175fc631344a1355506ce9d132da0a8b5a3df49da122d765f3de754"} Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.279408 4858 scope.go:117] "RemoveContainer" containerID="7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.299529 4858 scope.go:117] "RemoveContainer" containerID="9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.365111 4858 scope.go:117] "RemoveContainer" containerID="aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.391100 4858 scope.go:117] "RemoveContainer" containerID="7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2" Nov 22 08:19:16 crc kubenswrapper[4858]: E1122 08:19:16.392515 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2\": container with ID starting with 7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2 not found: ID does not exist" containerID="7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.392592 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2"} err="failed to get container status \"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2\": rpc error: code = NotFound desc = could not find container \"7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2\": container with ID starting with 7fb0f243f450cb89b2e377904842e2f5df5672895ac11f8bd06e2e69e36e84d2 not found: ID does not exist" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.392636 4858 scope.go:117] 
"RemoveContainer" containerID="9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17" Nov 22 08:19:16 crc kubenswrapper[4858]: E1122 08:19:16.393393 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17\": container with ID starting with 9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17 not found: ID does not exist" containerID="9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.393426 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17"} err="failed to get container status \"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17\": rpc error: code = NotFound desc = could not find container \"9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17\": container with ID starting with 9d29a1e3e917cbd127d66212c3d23a6091a94a6f868a3f5c9451f16f3273dd17 not found: ID does not exist" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.393442 4858 scope.go:117] "RemoveContainer" containerID="aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751" Nov 22 08:19:16 crc kubenswrapper[4858]: E1122 08:19:16.394682 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751\": container with ID starting with aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751 not found: ID does not exist" containerID="aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.394830 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751"} err="failed to get container status \"aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751\": rpc error: code = NotFound desc = could not find container \"aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751\": container with ID starting with aacaead70740c64bf3effb132db3511759b7d6f6c869c995049b8fb76467e751 not found: ID does not exist" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.784264 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19a25a9f-6f87-47b1-bbb0-b85211b61266" (UID: "19a25a9f-6f87-47b1-bbb0-b85211b61266"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.873454 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a25a9f-6f87-47b1-bbb0-b85211b61266-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.918113 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:16 crc kubenswrapper[4858]: I1122 08:19:16.925177 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5x69k"] Nov 22 08:19:17 crc kubenswrapper[4858]: I1122 08:19:17.546143 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" path="/var/lib/kubelet/pods/19a25a9f-6f87-47b1-bbb0-b85211b61266/volumes" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.256302 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:40 crc kubenswrapper[4858]: E1122 08:20:40.257520 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="registry-server" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.257551 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="registry-server" Nov 22 08:20:40 crc kubenswrapper[4858]: E1122 08:20:40.257577 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="extract-utilities" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.257587 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="extract-utilities" Nov 22 08:20:40 crc kubenswrapper[4858]: E1122 08:20:40.257611 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="extract-content" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.257619 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="extract-content" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.257851 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a25a9f-6f87-47b1-bbb0-b85211b61266" containerName="registry-server" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.259352 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.275072 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.313766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm47b\" (UniqueName: \"kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.313875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.313904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.415853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm47b\" (UniqueName: \"kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.415979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.416012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.416739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.416757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.442114 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sm47b\" (UniqueName: \"kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b\") pod \"certified-operators-9wx2w\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:40 crc kubenswrapper[4858]: I1122 08:20:40.582882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:41 crc kubenswrapper[4858]: I1122 08:20:41.135452 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:41 crc kubenswrapper[4858]: I1122 08:20:41.989047 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerID="010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673" exitCode=0 Nov 22 08:20:41 crc kubenswrapper[4858]: I1122 08:20:41.989218 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerDied","Data":"010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673"} Nov 22 08:20:41 crc kubenswrapper[4858]: I1122 08:20:41.990529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerStarted","Data":"7b5901a7de7593c717748ec0b598766ae81354eebac8d08d6faabf846a14c58c"} Nov 22 08:20:41 crc kubenswrapper[4858]: I1122 08:20:41.996981 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:20:44 crc kubenswrapper[4858]: I1122 08:20:44.010533 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerID="18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338" exitCode=0 Nov 22 08:20:44 crc kubenswrapper[4858]: I1122 08:20:44.010715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerDied","Data":"18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338"} Nov 22 08:20:45 crc kubenswrapper[4858]: I1122 08:20:45.022553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerStarted","Data":"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3"} Nov 22 08:20:45 crc kubenswrapper[4858]: I1122 08:20:45.049677 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wx2w" podStartSLOduration=2.512022033 podStartE2EDuration="5.049647808s" podCreationTimestamp="2025-11-22 08:20:40 +0000 UTC" firstStartedPulling="2025-11-22 08:20:41.996651915 +0000 UTC m=+4203.838074921" lastFinishedPulling="2025-11-22 08:20:44.53427769 +0000 UTC m=+4206.375700696" observedRunningTime="2025-11-22 08:20:45.042297693 +0000 UTC m=+4206.883720719" watchObservedRunningTime="2025-11-22 08:20:45.049647808 +0000 UTC m=+4206.891070814" Nov 22 08:20:45 crc kubenswrapper[4858]: I1122 08:20:45.313073 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:20:45 crc kubenswrapper[4858]: I1122 08:20:45.313152 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:20:50 crc kubenswrapper[4858]: I1122 08:20:50.583444 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:50 crc kubenswrapper[4858]: I1122 08:20:50.584105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:50 crc kubenswrapper[4858]: I1122 08:20:50.626913 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:51 crc kubenswrapper[4858]: I1122 08:20:51.115806 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:51 crc kubenswrapper[4858]: I1122 08:20:51.164839 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.089708 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wx2w" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="registry-server" containerID="cri-o://9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3" gracePeriod=2 Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.502974 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.560851 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm47b\" (UniqueName: \"kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b\") pod \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.560942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content\") pod \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.561024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities\") pod \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\" (UID: \"d4e7a189-fbab-4de0-8956-3b5bd786ebed\") " Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.562639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities" (OuterVolumeSpecName: "utilities") pod "d4e7a189-fbab-4de0-8956-3b5bd786ebed" (UID: "d4e7a189-fbab-4de0-8956-3b5bd786ebed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.571976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b" (OuterVolumeSpecName: "kube-api-access-sm47b") pod "d4e7a189-fbab-4de0-8956-3b5bd786ebed" (UID: "d4e7a189-fbab-4de0-8956-3b5bd786ebed"). InnerVolumeSpecName "kube-api-access-sm47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.623821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4e7a189-fbab-4de0-8956-3b5bd786ebed" (UID: "d4e7a189-fbab-4de0-8956-3b5bd786ebed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.662944 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.663276 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm47b\" (UniqueName: \"kubernetes.io/projected/d4e7a189-fbab-4de0-8956-3b5bd786ebed-kube-api-access-sm47b\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:53 crc kubenswrapper[4858]: I1122 08:20:53.663417 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e7a189-fbab-4de0-8956-3b5bd786ebed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.102654 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerID="9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3" exitCode=0 Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.102719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerDied","Data":"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3"} Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.102770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wx2w" event={"ID":"d4e7a189-fbab-4de0-8956-3b5bd786ebed","Type":"ContainerDied","Data":"7b5901a7de7593c717748ec0b598766ae81354eebac8d08d6faabf846a14c58c"} Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.102767 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wx2w" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.102797 4858 scope.go:117] "RemoveContainer" containerID="9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.138255 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.142190 4858 scope.go:117] "RemoveContainer" containerID="18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.143468 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wx2w"] Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.166895 4858 scope.go:117] "RemoveContainer" containerID="010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.189654 4858 scope.go:117] "RemoveContainer" containerID="9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3" Nov 22 08:20:54 crc kubenswrapper[4858]: E1122 08:20:54.190500 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3\": container with ID starting with 9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3 not found: ID does not exist" containerID="9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.190585 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3"} err="failed to get container status \"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3\": rpc error: code = NotFound desc = could not find container \"9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3\": container with ID starting with 9a4acc9975e6ba03e60b8d0d74b26313f59ea8eec9cf91c914b93dc14e7bfed3 not found: ID does not exist" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.190643 4858 scope.go:117] "RemoveContainer" containerID="18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338" Nov 22 08:20:54 crc kubenswrapper[4858]: E1122 08:20:54.192436 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338\": container with ID starting with 18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338 not found: ID does not exist" containerID="18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.192567 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338"} err="failed to get container status \"18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338\": rpc error: code = NotFound desc = could not find container \"18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338\": container with ID starting with 18bd554848dfe78884d5ee8837180b33bfb3447e6222d2082742b56d2b2bb338 not found: ID does not exist" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.192701 4858 scope.go:117] "RemoveContainer" 
containerID="010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673" Nov 22 08:20:54 crc kubenswrapper[4858]: E1122 08:20:54.193721 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673\": container with ID starting with 010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673 not found: ID does not exist" containerID="010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673" Nov 22 08:20:54 crc kubenswrapper[4858]: I1122 08:20:54.193800 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673"} err="failed to get container status \"010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673\": rpc error: code = NotFound desc = could not find container \"010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673\": container with ID starting with 010db084045cc932b0cb8b3e0f8128d0eabd28a224ec36ffa18c5b30f68f5673 not found: ID does not exist" Nov 22 08:20:55 crc kubenswrapper[4858]: I1122 08:20:55.545880 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" path="/var/lib/kubelet/pods/d4e7a189-fbab-4de0-8956-3b5bd786ebed/volumes" Nov 22 08:21:15 crc kubenswrapper[4858]: I1122 08:21:15.312040 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:21:15 crc kubenswrapper[4858]: I1122 08:21:15.313107 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.312380 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.313079 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.313127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.313861 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:21:45 crc 
kubenswrapper[4858]: I1122 08:21:45.313933 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9" gracePeriod=600 Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.532003 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9" exitCode=0 Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.532066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9"} Nov 22 08:21:45 crc kubenswrapper[4858]: I1122 08:21:45.532120 4858 scope.go:117] "RemoveContainer" containerID="b63b0f3f39dc6254889689896f34a1b755fb1d3c22dc6b5189af4f260fa7e894" Nov 22 08:21:46 crc kubenswrapper[4858]: I1122 08:21:46.541474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5"} Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.483027 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:22:58 crc kubenswrapper[4858]: E1122 08:22:58.484023 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="registry-server" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.484037 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="registry-server" Nov 22 08:22:58 crc kubenswrapper[4858]: E1122 08:22:58.484054 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="extract-utilities" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.484060 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="extract-utilities" Nov 22 08:22:58 crc kubenswrapper[4858]: E1122 08:22:58.484082 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="extract-content" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.484088 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="extract-content" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.484239 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e7a189-fbab-4de0-8956-3b5bd786ebed" containerName="registry-server" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.485378 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.501355 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.661762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.661809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.662022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft46t\" (UniqueName: \"kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.763261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft46t\" (UniqueName: \"kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.763357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.763379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.763868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.763993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.786278 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ft46t\" (UniqueName: \"kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t\") pod \"redhat-marketplace-4x6ft\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:58 crc kubenswrapper[4858]: I1122 08:22:58.806785 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:22:59 crc kubenswrapper[4858]: I1122 08:22:59.303406 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:23:00 crc kubenswrapper[4858]: I1122 08:23:00.071498 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerID="fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b" exitCode=0 Nov 22 08:23:00 crc kubenswrapper[4858]: I1122 08:23:00.071563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerDied","Data":"fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b"} Nov 22 08:23:00 crc kubenswrapper[4858]: I1122 08:23:00.072809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerStarted","Data":"ae284759c83dfbf235f84049f02b7e093613a2d1983980917f72ab84c71d3b0d"} Nov 22 08:23:03 crc kubenswrapper[4858]: I1122 08:23:03.095461 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerID="0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b" exitCode=0 Nov 22 08:23:03 crc kubenswrapper[4858]: I1122 08:23:03.095623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerDied","Data":"0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b"} Nov 22 08:23:07 crc kubenswrapper[4858]: I1122 08:23:07.124366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerStarted","Data":"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a"} Nov 22 08:23:07 crc kubenswrapper[4858]: I1122 08:23:07.148609 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4x6ft" podStartSLOduration=2.76733031 podStartE2EDuration="9.148579779s" podCreationTimestamp="2025-11-22 08:22:58 +0000 UTC" firstStartedPulling="2025-11-22 08:23:00.073512829 +0000 UTC m=+4341.914935855" lastFinishedPulling="2025-11-22 08:23:06.454762318 +0000 UTC m=+4348.296185324" observedRunningTime="2025-11-22 08:23:07.14581896 +0000 UTC m=+4348.987241976" watchObservedRunningTime="2025-11-22 08:23:07.148579779 +0000 UTC m=+4348.990002785" Nov 22 08:23:08 crc kubenswrapper[4858]: I1122 08:23:08.807036 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:08 crc kubenswrapper[4858]: I1122 08:23:08.807087 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:08 crc kubenswrapper[4858]: I1122 08:23:08.850134 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:18 crc kubenswrapper[4858]: I1122 08:23:18.848841 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:18 crc kubenswrapper[4858]: I1122 08:23:18.900014 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:23:19 crc kubenswrapper[4858]: I1122 08:23:19.207298 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4x6ft" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="registry-server" containerID="cri-o://3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a" gracePeriod=2 Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.142254 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.217912 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerID="3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a" exitCode=0 Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.217959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerDied","Data":"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a"} Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.217985 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4x6ft" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.218005 4858 scope.go:117] "RemoveContainer" containerID="3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.217993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4x6ft" event={"ID":"c6d4267c-1ab6-4092-9610-2454d29c9f2e","Type":"ContainerDied","Data":"ae284759c83dfbf235f84049f02b7e093613a2d1983980917f72ab84c71d3b0d"} Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.245981 4858 scope.go:117] "RemoveContainer" containerID="0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.264510 4858 scope.go:117] "RemoveContainer" containerID="fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.275287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities\") pod \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.275632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft46t\" (UniqueName: \"kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t\") pod \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.275787 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content\") pod \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\" (UID: \"c6d4267c-1ab6-4092-9610-2454d29c9f2e\") " Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.276533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities" (OuterVolumeSpecName: "utilities") pod "c6d4267c-1ab6-4092-9610-2454d29c9f2e" (UID: "c6d4267c-1ab6-4092-9610-2454d29c9f2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.283057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t" (OuterVolumeSpecName: "kube-api-access-ft46t") pod "c6d4267c-1ab6-4092-9610-2454d29c9f2e" (UID: "c6d4267c-1ab6-4092-9610-2454d29c9f2e"). InnerVolumeSpecName "kube-api-access-ft46t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.293911 4858 scope.go:117] "RemoveContainer" containerID="3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a" Nov 22 08:23:20 crc kubenswrapper[4858]: E1122 08:23:20.296371 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a\": container with ID starting with 3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a not found: ID does not exist" containerID="3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.296471 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a"} err="failed to get container status \"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a\": rpc error: code = NotFound desc = could not find container \"3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a\": container with ID starting with 3c90285962f57d3ebc9d1ba985de68cd91bf24c75828d66214a38a5e21c0d64a not found: ID does not exist" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.296501 4858 scope.go:117] "RemoveContainer" containerID="0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.297157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d4267c-1ab6-4092-9610-2454d29c9f2e" (UID: "c6d4267c-1ab6-4092-9610-2454d29c9f2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:23:20 crc kubenswrapper[4858]: E1122 08:23:20.298779 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b\": container with ID starting with 0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b not found: ID does not exist" containerID="0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.298821 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b"} err="failed to get container status \"0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b\": rpc error: code = NotFound desc = could not find container \"0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b\": container with ID starting with 0186b2087a29a8836887bca157f3107dcd04a017ca7125adaa6b79a59b359d6b not found: ID does not exist" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.298847 4858 scope.go:117] "RemoveContainer" containerID="fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b" Nov 22 08:23:20 crc kubenswrapper[4858]: E1122 08:23:20.299893 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b\": container with ID starting with fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b not found: ID does not exist" containerID="fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.300046 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b"} err="failed to get container status \"fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b\": rpc error: code = NotFound desc = could not find container \"fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b\": container with ID starting with fad693bcc25cf9d2bd47f3b4870e51c94e1b9b2919eccadbe891532d3189934b not found: ID does not exist" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.377144 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft46t\" (UniqueName: \"kubernetes.io/projected/c6d4267c-1ab6-4092-9610-2454d29c9f2e-kube-api-access-ft46t\") on node \"crc\" DevicePath \"\"" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.377186 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.377196 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d4267c-1ab6-4092-9610-2454d29c9f2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.563817 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:23:20 crc kubenswrapper[4858]: I1122 08:23:20.568975 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4x6ft"] Nov 22 08:23:21 crc kubenswrapper[4858]: I1122 
08:23:21.546836 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" path="/var/lib/kubelet/pods/c6d4267c-1ab6-4092-9610-2454d29c9f2e/volumes" Nov 22 08:23:45 crc kubenswrapper[4858]: I1122 08:23:45.312365 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:23:45 crc kubenswrapper[4858]: I1122 08:23:45.312963 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:24:15 crc kubenswrapper[4858]: I1122 08:24:15.312836 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:24:15 crc kubenswrapper[4858]: I1122 08:24:15.313444 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.312635 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.314976 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.315139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.316032 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.316212 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" gracePeriod=600 Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.928099 4858 
generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" exitCode=0 Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.928197 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5"} Nov 22 08:24:45 crc kubenswrapper[4858]: I1122 08:24:45.928549 4858 scope.go:117] "RemoveContainer" containerID="3df7838fa67a1d57097e2056d897b19816f7aaa1e83b353834835bfa3131d6f9" Nov 22 08:24:46 crc kubenswrapper[4858]: E1122 08:24:46.100020 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:24:46 crc kubenswrapper[4858]: I1122 08:24:46.941573 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:24:46 crc kubenswrapper[4858]: E1122 08:24:46.941873 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:24:59 crc kubenswrapper[4858]: I1122 08:24:59.542614 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:24:59 crc kubenswrapper[4858]: E1122 08:24:59.544076 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:25:14 crc kubenswrapper[4858]: I1122 08:25:14.535576 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:25:14 crc kubenswrapper[4858]: E1122 08:25:14.536360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:25:25 crc kubenswrapper[4858]: I1122 08:25:25.536301 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:25:25 crc kubenswrapper[4858]: E1122 08:25:25.536975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:25:38 crc kubenswrapper[4858]: I1122 08:25:38.535928 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:25:38 crc kubenswrapper[4858]: E1122 08:25:38.536985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:25:49 crc kubenswrapper[4858]: I1122 08:25:49.540180 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:25:49 crc kubenswrapper[4858]: E1122 08:25:49.541096 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:26:04 crc kubenswrapper[4858]: I1122 08:26:04.535523 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:26:04 crc kubenswrapper[4858]: E1122 08:26:04.536289 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:26:17 crc kubenswrapper[4858]: I1122 08:26:17.535901 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:26:17 crc kubenswrapper[4858]: E1122 08:26:17.536768 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:26:30 crc kubenswrapper[4858]: I1122 08:26:30.536232 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:26:30 crc kubenswrapper[4858]: E1122 08:26:30.538360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.888446 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:26:39 crc kubenswrapper[4858]: E1122 08:26:39.889416 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="extract-content" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.889433 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="extract-content" Nov 22 08:26:39 crc kubenswrapper[4858]: E1122 08:26:39.889447 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="registry-server" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.889454 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="registry-server" Nov 22 08:26:39 crc kubenswrapper[4858]: E1122 08:26:39.889477 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="extract-utilities" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.889487 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="extract-utilities" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.889674 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d4267c-1ab6-4092-9610-2454d29c9f2e" containerName="registry-server" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.890855 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.903829 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.993140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps4k\" (UniqueName: \"kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.993247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:39 crc kubenswrapper[4858]: I1122 08:26:39.993304 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.095202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps4k\" (UniqueName: \"kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.095294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.095368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.095963 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.096597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.117474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9ps4k\" (UniqueName: \"kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k\") pod \"redhat-operators-jl5j2\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.219028 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.695348 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:26:40 crc kubenswrapper[4858]: I1122 08:26:40.770676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerStarted","Data":"59c22045ddbfb4ff965294eb7dcefd0aa8bc57223bbac0991eb4b77fe418d839"} Nov 22 08:26:41 crc kubenswrapper[4858]: I1122 08:26:41.778861 4858 generic.go:334] "Generic (PLEG): container finished" podID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerID="c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2" exitCode=0 Nov 22 08:26:41 crc kubenswrapper[4858]: I1122 08:26:41.778908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerDied","Data":"c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2"} Nov 22 08:26:41 crc kubenswrapper[4858]: I1122 08:26:41.780364 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:26:43 crc kubenswrapper[4858]: I1122 08:26:43.536511 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:26:43 crc kubenswrapper[4858]: E1122 08:26:43.537120 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:26:43 crc kubenswrapper[4858]: I1122 08:26:43.796466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerStarted","Data":"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2"} Nov 22 08:26:44 crc kubenswrapper[4858]: I1122 08:26:44.806069 4858 generic.go:334] "Generic (PLEG): container finished" podID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerID="bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2" exitCode=0 Nov 22 08:26:44 crc kubenswrapper[4858]: I1122 08:26:44.806131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerDied","Data":"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2"} Nov 22 08:26:46 crc kubenswrapper[4858]: I1122 08:26:46.826415 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" 
event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerStarted","Data":"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34"} Nov 22 08:26:46 crc kubenswrapper[4858]: I1122 08:26:46.854910 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jl5j2" podStartSLOduration=3.692502794 podStartE2EDuration="7.854893231s" podCreationTimestamp="2025-11-22 08:26:39 +0000 UTC" firstStartedPulling="2025-11-22 08:26:41.780104116 +0000 UTC m=+4563.621527122" lastFinishedPulling="2025-11-22 08:26:45.942494553 +0000 UTC m=+4567.783917559" observedRunningTime="2025-11-22 08:26:46.854429826 +0000 UTC m=+4568.695852842" watchObservedRunningTime="2025-11-22 08:26:46.854893231 +0000 UTC m=+4568.696316237" Nov 22 08:26:50 crc kubenswrapper[4858]: I1122 08:26:50.219847 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:50 crc kubenswrapper[4858]: I1122 08:26:50.221118 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:26:51 crc kubenswrapper[4858]: I1122 08:26:51.257608 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jl5j2" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="registry-server" probeResult="failure" output=< Nov 22 08:26:51 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:26:51 crc kubenswrapper[4858]: > Nov 22 08:26:57 crc kubenswrapper[4858]: I1122 08:26:57.535649 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:26:57 crc kubenswrapper[4858]: E1122 08:26:57.536400 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:27:00 crc kubenswrapper[4858]: I1122 08:27:00.259817 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:27:00 crc kubenswrapper[4858]: I1122 08:27:00.310078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:27:00 crc kubenswrapper[4858]: I1122 08:27:00.500060 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:27:01 crc kubenswrapper[4858]: I1122 08:27:01.940144 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jl5j2" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="registry-server" containerID="cri-o://6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34" gracePeriod=2 Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.336173 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.450504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content\") pod \"980b7f89-f9bf-4e21-9790-faadec5ac54b\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.450648 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ps4k\" (UniqueName: \"kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k\") pod \"980b7f89-f9bf-4e21-9790-faadec5ac54b\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.450744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities\") pod \"980b7f89-f9bf-4e21-9790-faadec5ac54b\" (UID: \"980b7f89-f9bf-4e21-9790-faadec5ac54b\") " Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.451560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities" (OuterVolumeSpecName: "utilities") pod "980b7f89-f9bf-4e21-9790-faadec5ac54b" (UID: "980b7f89-f9bf-4e21-9790-faadec5ac54b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.455553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k" (OuterVolumeSpecName: "kube-api-access-9ps4k") pod "980b7f89-f9bf-4e21-9790-faadec5ac54b" (UID: "980b7f89-f9bf-4e21-9790-faadec5ac54b"). InnerVolumeSpecName "kube-api-access-9ps4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.552135 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ps4k\" (UniqueName: \"kubernetes.io/projected/980b7f89-f9bf-4e21-9790-faadec5ac54b-kube-api-access-9ps4k\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.552191 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.559789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "980b7f89-f9bf-4e21-9790-faadec5ac54b" (UID: "980b7f89-f9bf-4e21-9790-faadec5ac54b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.655222 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980b7f89-f9bf-4e21-9790-faadec5ac54b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.947748 4858 generic.go:334] "Generic (PLEG): container finished" podID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerID="6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34" exitCode=0 Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.947803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerDied","Data":"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34"} Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.947840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl5j2" event={"ID":"980b7f89-f9bf-4e21-9790-faadec5ac54b","Type":"ContainerDied","Data":"59c22045ddbfb4ff965294eb7dcefd0aa8bc57223bbac0991eb4b77fe418d839"} Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.947860 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl5j2" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.947898 4858 scope.go:117] "RemoveContainer" containerID="6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.981577 4858 scope.go:117] "RemoveContainer" containerID="bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2" Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.985052 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:27:02 crc kubenswrapper[4858]: I1122 08:27:02.995272 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jl5j2"] Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.010301 4858 scope.go:117] "RemoveContainer" containerID="c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.033999 4858 scope.go:117] "RemoveContainer" containerID="6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34" Nov 22 08:27:03 crc kubenswrapper[4858]: E1122 08:27:03.034751 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34\": container with ID starting with 6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34 not found: ID does not exist" containerID="6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.034818 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34"} err="failed to get container status \"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34\": rpc error: code = NotFound desc = could not find container \"6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34\": container with ID starting with 6330e1d271e5c220e8d1e51cf797adc332c3c62bca82bcf0be517c9a6c618c34 not found: ID does not exist" Nov 22 08:27:03 crc 
kubenswrapper[4858]: I1122 08:27:03.034862 4858 scope.go:117] "RemoveContainer" containerID="bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2" Nov 22 08:27:03 crc kubenswrapper[4858]: E1122 08:27:03.036151 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2\": container with ID starting with bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2 not found: ID does not exist" containerID="bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.036194 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2"} err="failed to get container status \"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2\": rpc error: code = NotFound desc = could not find container \"bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2\": container with ID starting with bf32cfa4a67fb9ecc48fcd6e366c16b842914c4fccef9ab21cb40e392c2af5d2 not found: ID does not exist" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.036224 4858 scope.go:117] "RemoveContainer" containerID="c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2" Nov 22 08:27:03 crc kubenswrapper[4858]: E1122 08:27:03.036999 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2\": container with ID starting with c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2 not found: ID does not exist" containerID="c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.037028 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2"} err="failed to get container status \"c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2\": rpc error: code = NotFound desc = could not find container \"c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2\": container with ID starting with c7e886c52d88c2c532edb5078c75514ecdfc5d4c5af479bd9c7cf21589ff4bb2 not found: ID does not exist" Nov 22 08:27:03 crc kubenswrapper[4858]: I1122 08:27:03.545268 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" path="/var/lib/kubelet/pods/980b7f89-f9bf-4e21-9790-faadec5ac54b/volumes" Nov 22 08:27:12 crc kubenswrapper[4858]: I1122 08:27:12.536640 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:27:12 crc kubenswrapper[4858]: E1122 08:27:12.537640 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:27:25 crc kubenswrapper[4858]: I1122 08:27:25.537100 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" 
Nov 22 08:27:25 crc kubenswrapper[4858]: E1122 08:27:25.540727 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:27:40 crc kubenswrapper[4858]: I1122 08:27:40.535979 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:27:40 crc kubenswrapper[4858]: E1122 08:27:40.538729 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:27:51 crc kubenswrapper[4858]: I1122 08:27:51.536579 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:27:51 crc kubenswrapper[4858]: E1122 08:27:51.537450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:28:02 crc kubenswrapper[4858]: I1122 08:28:02.536062 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:28:02 crc kubenswrapper[4858]: E1122 08:28:02.536864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:28:16 crc kubenswrapper[4858]: I1122 08:28:16.536251 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:28:16 crc kubenswrapper[4858]: E1122 08:28:16.537021 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:28:30 crc kubenswrapper[4858]: I1122 08:28:30.535985 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:28:30 crc kubenswrapper[4858]: E1122 08:28:30.536991 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:28:44 crc kubenswrapper[4858]: I1122 08:28:44.536044 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:28:44 crc kubenswrapper[4858]: E1122 08:28:44.537119 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:28:55 crc kubenswrapper[4858]: I1122 08:28:55.535697 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:28:55 crc kubenswrapper[4858]: E1122 08:28:55.536925 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:29:08 crc kubenswrapper[4858]: I1122 08:29:08.535685 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:29:08 crc kubenswrapper[4858]: E1122 08:29:08.536548 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:29:21 crc kubenswrapper[4858]: I1122 08:29:21.535958 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:29:21 crc kubenswrapper[4858]: E1122 08:29:21.536875 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:29:34 crc kubenswrapper[4858]: I1122 08:29:34.535575 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:29:34 crc kubenswrapper[4858]: E1122 08:29:34.536227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:29:48 crc kubenswrapper[4858]: I1122 08:29:48.536267 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:29:49 crc kubenswrapper[4858]: I1122 08:29:49.149936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c"} Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.156110 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7"] Nov 22 08:30:00 crc kubenswrapper[4858]: E1122 08:30:00.157099 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="extract-content" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.157118 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="extract-content" Nov 22 08:30:00 crc kubenswrapper[4858]: E1122 08:30:00.157145 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="registry-server" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.157154 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="registry-server" Nov 22 08:30:00 crc kubenswrapper[4858]: E1122 08:30:00.157180 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="extract-utilities" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.157188 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="extract-utilities" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.157396 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="980b7f89-f9bf-4e21-9790-faadec5ac54b" containerName="registry-server" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.157960 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.160573 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.160673 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.162412 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7"] Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.301444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdfx\" (UniqueName: \"kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.301834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.301886 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.404068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdfx\" (UniqueName: \"kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.404223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.404287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.405311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume\") pod 
\"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.429448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.431726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdfx\" (UniqueName: \"kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx\") pod \"collect-profiles-29396670-s4pq7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.490930 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:00 crc kubenswrapper[4858]: I1122 08:30:00.916558 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7"] Nov 22 08:30:00 crc kubenswrapper[4858]: W1122 08:30:00.921454 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c0d7c96_7624_4ac8_b72e_7536a0fb25b7.slice/crio-352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311 WatchSource:0}: Error finding container 352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311: Status 404 returned error can't find the container with id 352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311 Nov 22 08:30:01 crc kubenswrapper[4858]: I1122 08:30:01.236282 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" event={"ID":"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7","Type":"ContainerStarted","Data":"ad38c91fbcc2fb98dad6e27e00989dd7e079e72f0d4f05a28603bcb2da534b2b"} Nov 22 08:30:01 crc kubenswrapper[4858]: I1122 08:30:01.236347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" event={"ID":"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7","Type":"ContainerStarted","Data":"352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311"} Nov 22 08:30:01 crc kubenswrapper[4858]: I1122 08:30:01.267264 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" podStartSLOduration=1.267242 podStartE2EDuration="1.267242s" podCreationTimestamp="2025-11-22 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:30:01.265416012 +0000 UTC m=+4763.106839038" watchObservedRunningTime="2025-11-22 08:30:01.267242 +0000 UTC m=+4763.108665026" Nov 22 08:30:02 crc kubenswrapper[4858]: I1122 08:30:02.245094 4858 generic.go:334] "Generic (PLEG): container finished" podID="4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" containerID="ad38c91fbcc2fb98dad6e27e00989dd7e079e72f0d4f05a28603bcb2da534b2b" exitCode=0 Nov 22 08:30:02 crc kubenswrapper[4858]: I1122 08:30:02.245206 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" event={"ID":"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7","Type":"ContainerDied","Data":"ad38c91fbcc2fb98dad6e27e00989dd7e079e72f0d4f05a28603bcb2da534b2b"} Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.513439 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.647351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume\") pod \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.647458 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume\") pod \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.647496 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gdfx\" (UniqueName: \"kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx\") pod \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\" (UID: \"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7\") " Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.648282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" (UID: "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.653429 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" (UID: "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.653560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx" (OuterVolumeSpecName: "kube-api-access-9gdfx") pod "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" (UID: "4c0d7c96-7624-4ac8-b72e-7536a0fb25b7"). InnerVolumeSpecName "kube-api-access-9gdfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.748919 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.748963 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:03 crc kubenswrapper[4858]: I1122 08:30:03.748974 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gdfx\" (UniqueName: \"kubernetes.io/projected/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7-kube-api-access-9gdfx\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:04 crc kubenswrapper[4858]: I1122 08:30:04.259637 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" event={"ID":"4c0d7c96-7624-4ac8-b72e-7536a0fb25b7","Type":"ContainerDied","Data":"352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311"} Nov 22 08:30:04 crc kubenswrapper[4858]: I1122 08:30:04.259697 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352cbfc30ad896ecdc07acf3a78ad5b44459eab22f72e7a4803aaf7906cfa311" Nov 22 08:30:04 crc kubenswrapper[4858]: I1122 08:30:04.259699 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7" Nov 22 08:30:04 crc kubenswrapper[4858]: I1122 08:30:04.594705 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk"] Nov 22 08:30:04 crc kubenswrapper[4858]: I1122 08:30:04.599749 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-kjqjk"] Nov 22 08:30:05 crc kubenswrapper[4858]: I1122 08:30:05.544677 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c" path="/var/lib/kubelet/pods/6afba2ba-4cf2-4450-aaac-d7dfe4d4da8c/volumes" Nov 22 08:30:32 crc kubenswrapper[4858]: I1122 08:30:32.443512 4858 scope.go:117] "RemoveContainer" containerID="4784e527139c0194e9c1959438eb39267fdc06b2e335446f473a9c52b341697d" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.500566 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:31:47 crc kubenswrapper[4858]: E1122 08:31:47.501931 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" containerName="collect-profiles" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.501955 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" containerName="collect-profiles" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.502143 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" containerName="collect-profiles" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.505798 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.506265 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.573757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.573825 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.573918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qds\" (UniqueName: \"kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.675126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qds\" (UniqueName: \"kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.675254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.675313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.675933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.676047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.700953 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s8qds\" (UniqueName: \"kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds\") pod \"certified-operators-s9l6w\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:47 crc kubenswrapper[4858]: I1122 08:31:47.826298 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:48 crc kubenswrapper[4858]: I1122 08:31:48.164285 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:31:49 crc kubenswrapper[4858]: I1122 08:31:49.018552 4858 generic.go:334] "Generic (PLEG): container finished" podID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerID="57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162" exitCode=0 Nov 22 08:31:49 crc kubenswrapper[4858]: I1122 08:31:49.018677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerDied","Data":"57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162"} Nov 22 08:31:49 crc kubenswrapper[4858]: I1122 08:31:49.018880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerStarted","Data":"a1f26ef9e719c18816610e55bc70a58545e1cb29737b5234ee9f99567834b850"} Nov 22 08:31:49 crc kubenswrapper[4858]: I1122 08:31:49.020898 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:31:50 crc kubenswrapper[4858]: I1122 08:31:50.027590 4858 generic.go:334] "Generic (PLEG): container finished" podID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerID="dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965" exitCode=0 Nov 22 08:31:50 crc kubenswrapper[4858]: I1122 08:31:50.027681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerDied","Data":"dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965"} Nov 22 08:31:51 crc kubenswrapper[4858]: I1122 08:31:51.041281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerStarted","Data":"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3"} Nov 22 08:31:51 crc kubenswrapper[4858]: I1122 08:31:51.071843 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9l6w" podStartSLOduration=2.641524426 podStartE2EDuration="4.071807092s" podCreationTimestamp="2025-11-22 08:31:47 +0000 UTC" firstStartedPulling="2025-11-22 08:31:49.020605129 +0000 UTC m=+4870.862028135" lastFinishedPulling="2025-11-22 08:31:50.450887785 +0000 UTC m=+4872.292310801" observedRunningTime="2025-11-22 08:31:51.062213935 +0000 UTC m=+4872.903636951" watchObservedRunningTime="2025-11-22 08:31:51.071807092 +0000 UTC m=+4872.913230118" Nov 22 08:31:57 crc kubenswrapper[4858]: I1122 08:31:57.826737 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:57 crc kubenswrapper[4858]: I1122 08:31:57.827251 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:57 crc kubenswrapper[4858]: I1122 08:31:57.873693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:58 crc kubenswrapper[4858]: I1122 08:31:58.140405 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:31:58 crc kubenswrapper[4858]: I1122 08:31:58.191554 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.109448 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9l6w" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="registry-server" containerID="cri-o://448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3" gracePeriod=2 Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.506519 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.562786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities\") pod \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.562838 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qds\" (UniqueName: \"kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds\") pod \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.562861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content\") pod \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\" (UID: \"c003d640-1dac-4b1f-84d7-d30fc992d3e9\") " Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.563805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities" (OuterVolumeSpecName: "utilities") pod "c003d640-1dac-4b1f-84d7-d30fc992d3e9" (UID: "c003d640-1dac-4b1f-84d7-d30fc992d3e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.568434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds" (OuterVolumeSpecName: "kube-api-access-s8qds") pod "c003d640-1dac-4b1f-84d7-d30fc992d3e9" (UID: "c003d640-1dac-4b1f-84d7-d30fc992d3e9"). InnerVolumeSpecName "kube-api-access-s8qds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.664945 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:32:00 crc kubenswrapper[4858]: I1122 08:32:00.664981 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8qds\" (UniqueName: \"kubernetes.io/projected/c003d640-1dac-4b1f-84d7-d30fc992d3e9-kube-api-access-s8qds\") on node \"crc\" DevicePath \"\"" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.121114 4858 generic.go:334] "Generic (PLEG): container finished" podID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerID="448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3" exitCode=0 Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.121166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerDied","Data":"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3"} Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.121201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9l6w" event={"ID":"c003d640-1dac-4b1f-84d7-d30fc992d3e9","Type":"ContainerDied","Data":"a1f26ef9e719c18816610e55bc70a58545e1cb29737b5234ee9f99567834b850"} Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.121226 4858 scope.go:117] "RemoveContainer" containerID="448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.121380 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9l6w" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.144280 4858 scope.go:117] "RemoveContainer" containerID="dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.167100 4858 scope.go:117] "RemoveContainer" containerID="57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.188466 4858 scope.go:117] "RemoveContainer" containerID="448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3" Nov 22 08:32:01 crc kubenswrapper[4858]: E1122 08:32:01.189269 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3\": container with ID starting with 448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3 not found: ID does not exist" containerID="448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.189335 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3"} err="failed to get container status \"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3\": rpc error: code = NotFound desc = could not find container \"448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3\": container with ID starting with 448c4774ab153650f3c24d083843c1e27cb417d3ee7b73b93de3570bb3ae49e3 not found: ID does not exist" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.189370 4858 scope.go:117] "RemoveContainer" containerID="dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965" Nov 22 08:32:01 crc kubenswrapper[4858]: E1122 08:32:01.189850 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965\": container with ID starting with dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965 not found: ID does not exist" containerID="dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.189886 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965"} err="failed to get container status \"dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965\": rpc error: code = NotFound desc = could not find container \"dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965\": container with ID starting with dd094838545375050e39d938f90d3872275aac6000558211d7c65d871d310965 not found: ID does not exist" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.189908 4858 scope.go:117] "RemoveContainer" containerID="57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162" Nov 22 08:32:01 crc kubenswrapper[4858]: E1122 08:32:01.190170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162\": container with ID starting with 57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162 not found: ID does not exist" containerID="57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162" 
Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.190203 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162"} err="failed to get container status \"57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162\": rpc error: code = NotFound desc = could not find container \"57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162\": container with ID starting with 57745b57234cb2bc977ebc8b2b23852b5ddd01af09775f9014ccd11aae647162 not found: ID does not exist" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.528575 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c003d640-1dac-4b1f-84d7-d30fc992d3e9" (UID: "c003d640-1dac-4b1f-84d7-d30fc992d3e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.578400 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c003d640-1dac-4b1f-84d7-d30fc992d3e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.749296 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:32:01 crc kubenswrapper[4858]: I1122 08:32:01.756640 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9l6w"] Nov 22 08:32:03 crc kubenswrapper[4858]: I1122 08:32:03.543361 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" path="/var/lib/kubelet/pods/c003d640-1dac-4b1f-84d7-d30fc992d3e9/volumes" Nov 22 08:32:15 crc kubenswrapper[4858]: I1122 08:32:15.312048 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:32:15 crc kubenswrapper[4858]: I1122 08:32:15.312572 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:32:45 crc kubenswrapper[4858]: I1122 08:32:45.312223 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:32:45 crc kubenswrapper[4858]: I1122 08:32:45.312819 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.312467 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.313491 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.313587 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.314935 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.315046 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c" gracePeriod=600 Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.644416 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c" exitCode=0 Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.644465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c"} Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.644778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148"} Nov 22 08:33:15 crc kubenswrapper[4858]: I1122 08:33:15.644810 4858 scope.go:117] "RemoveContainer" containerID="5539cc8fa383e7d79a16e3334ff328d1d2852ac71e771928d065e020bd1facc5" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.609888 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:21 crc kubenswrapper[4858]: E1122 08:33:21.610632 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="extract-utilities" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.610645 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="extract-utilities" Nov 22 08:33:21 crc kubenswrapper[4858]: E1122 08:33:21.610663 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="extract-content" Nov 22 08:33:21 crc 
kubenswrapper[4858]: I1122 08:33:21.610669 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="extract-content" Nov 22 08:33:21 crc kubenswrapper[4858]: E1122 08:33:21.610688 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="registry-server" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.610694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="registry-server" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.610862 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c003d640-1dac-4b1f-84d7-d30fc992d3e9" containerName="registry-server" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.611987 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.624465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.739128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.739244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qq5\" (UniqueName: \"kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.739270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.806279 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.808121 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.818140 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.840604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9qq5\" (UniqueName: \"kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.840666 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.840720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.841282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.841350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.868425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9qq5\" (UniqueName: \"kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5\") pod \"redhat-marketplace-qsqbp\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.935824 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.942675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.942786 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:21 crc kubenswrapper[4858]: I1122 08:33:21.942889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4lx\" (UniqueName: \"kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.044766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.044835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf4lx\" (UniqueName: \"kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.044893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.045400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.045599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities\") pod \"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.080491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf4lx\" (UniqueName: \"kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx\") pod 
\"community-operators-8p2l4\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.124786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.216004 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.402751 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.707898 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerID="2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71" exitCode=0 Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.707973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerDied","Data":"2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71"} Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.708001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerStarted","Data":"fefd7ad49fafc388cef816db498c5476bf3c6745372cf704fb6c73d05dad6f8f"} Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.709683 4858 generic.go:334] "Generic (PLEG): container finished" podID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerID="3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d" exitCode=0 Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.709712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerDied","Data":"3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d"} Nov 22 08:33:22 crc kubenswrapper[4858]: I1122 08:33:22.709729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerStarted","Data":"5fc2be450172640d7154962637d1ebfa2713f1c054ce2c725585e2eeae304d13"} Nov 22 08:33:24 crc kubenswrapper[4858]: I1122 08:33:24.724900 4858 generic.go:334] "Generic (PLEG): container finished" podID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerID="36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3" exitCode=0 Nov 22 08:33:24 crc kubenswrapper[4858]: I1122 08:33:24.724968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerDied","Data":"36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3"} Nov 22 08:33:24 crc kubenswrapper[4858]: I1122 08:33:24.727259 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerID="475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25" exitCode=0 Nov 22 08:33:24 crc kubenswrapper[4858]: I1122 08:33:24.727303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" 
event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerDied","Data":"475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25"} Nov 22 08:33:25 crc kubenswrapper[4858]: I1122 08:33:25.737812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerStarted","Data":"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db"} Nov 22 08:33:25 crc kubenswrapper[4858]: I1122 08:33:25.740026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerStarted","Data":"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9"} Nov 22 08:33:25 crc kubenswrapper[4858]: I1122 08:33:25.777892 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsqbp" podStartSLOduration=2.357261502 podStartE2EDuration="4.777860801s" podCreationTimestamp="2025-11-22 08:33:21 +0000 UTC" firstStartedPulling="2025-11-22 08:33:22.709480655 +0000 UTC m=+4964.550903661" lastFinishedPulling="2025-11-22 08:33:25.130079954 +0000 UTC m=+4966.971502960" observedRunningTime="2025-11-22 08:33:25.770442743 +0000 UTC m=+4967.611865769" watchObservedRunningTime="2025-11-22 08:33:25.777860801 +0000 UTC m=+4967.619283807" Nov 22 08:33:25 crc kubenswrapper[4858]: I1122 08:33:25.796540 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8p2l4" podStartSLOduration=2.400482646 podStartE2EDuration="4.796520519s" podCreationTimestamp="2025-11-22 08:33:21 +0000 UTC" firstStartedPulling="2025-11-22 08:33:22.711070706 +0000 UTC m=+4964.552493712" lastFinishedPulling="2025-11-22 08:33:25.107108579 +0000 UTC m=+4966.948531585" observedRunningTime="2025-11-22 08:33:25.791690914 +0000 UTC m=+4967.633113920" watchObservedRunningTime="2025-11-22 08:33:25.796520519 +0000 UTC m=+4967.637943535" Nov 22 08:33:31 crc kubenswrapper[4858]: I1122 08:33:31.936605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:31 crc kubenswrapper[4858]: I1122 08:33:31.937220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:31 crc kubenswrapper[4858]: I1122 08:33:31.982871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:32 crc kubenswrapper[4858]: I1122 08:33:32.125144 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:32 crc kubenswrapper[4858]: I1122 08:33:32.125469 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:32 crc kubenswrapper[4858]: I1122 08:33:32.166335 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:32 crc kubenswrapper[4858]: I1122 08:33:32.835449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:32 crc kubenswrapper[4858]: I1122 08:33:32.837406 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:33 crc kubenswrapper[4858]: I1122 08:33:33.612932 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:34 crc kubenswrapper[4858]: I1122 08:33:34.802740 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsqbp" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="registry-server" containerID="cri-o://6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db" gracePeriod=2 Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.016635 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.017152 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8p2l4" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="registry-server" containerID="cri-o://fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9" gracePeriod=2 Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.243559 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.339504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content\") pod \"7b3e5c66-5951-48be-90d9-b100a0921c3e\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.339603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities\") pod \"7b3e5c66-5951-48be-90d9-b100a0921c3e\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.339632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9qq5\" (UniqueName: \"kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5\") pod \"7b3e5c66-5951-48be-90d9-b100a0921c3e\" (UID: \"7b3e5c66-5951-48be-90d9-b100a0921c3e\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.342645 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities" (OuterVolumeSpecName: "utilities") pod "7b3e5c66-5951-48be-90d9-b100a0921c3e" (UID: "7b3e5c66-5951-48be-90d9-b100a0921c3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.345044 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5" (OuterVolumeSpecName: "kube-api-access-r9qq5") pod "7b3e5c66-5951-48be-90d9-b100a0921c3e" (UID: "7b3e5c66-5951-48be-90d9-b100a0921c3e"). InnerVolumeSpecName "kube-api-access-r9qq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.361973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b3e5c66-5951-48be-90d9-b100a0921c3e" (UID: "7b3e5c66-5951-48be-90d9-b100a0921c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.379387 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.441566 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.441607 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3e5c66-5951-48be-90d9-b100a0921c3e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.441621 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9qq5\" (UniqueName: \"kubernetes.io/projected/7b3e5c66-5951-48be-90d9-b100a0921c3e-kube-api-access-r9qq5\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.542216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content\") pod \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.542387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf4lx\" (UniqueName: \"kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx\") pod \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.542445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities\") pod \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\" (UID: \"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85\") " Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.543646 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities" (OuterVolumeSpecName: "utilities") pod "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" (UID: "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.544993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx" (OuterVolumeSpecName: "kube-api-access-gf4lx") pod "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" (UID: "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85"). InnerVolumeSpecName "kube-api-access-gf4lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.593753 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" (UID: "06e5860b-90fe-48fb-b3a7-ffc98ca1ca85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.644881 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf4lx\" (UniqueName: \"kubernetes.io/projected/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-kube-api-access-gf4lx\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.644964 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.644991 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.812842 4858 generic.go:334] "Generic (PLEG): container finished" podID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerID="fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9" exitCode=0 Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.812944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerDied","Data":"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9"} Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.812978 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8p2l4" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.813014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8p2l4" event={"ID":"06e5860b-90fe-48fb-b3a7-ffc98ca1ca85","Type":"ContainerDied","Data":"5fc2be450172640d7154962637d1ebfa2713f1c054ce2c725585e2eeae304d13"} Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.813040 4858 scope.go:117] "RemoveContainer" containerID="fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.816387 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerID="6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db" exitCode=0 Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.816488 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsqbp" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.816484 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerDied","Data":"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db"} Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.816828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsqbp" event={"ID":"7b3e5c66-5951-48be-90d9-b100a0921c3e","Type":"ContainerDied","Data":"fefd7ad49fafc388cef816db498c5476bf3c6745372cf704fb6c73d05dad6f8f"} Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.841019 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.841348 4858 scope.go:117] "RemoveContainer" containerID="36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.853104 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsqbp"] Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.858203 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.862398 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8p2l4"] Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.863266 4858 scope.go:117] "RemoveContainer" containerID="3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.884193 4858 scope.go:117] "RemoveContainer" containerID="fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.884663 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9\": container with ID starting with fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9 not found: ID does not exist" containerID="fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.884697 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9"} err="failed to get container status \"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9\": rpc error: code = NotFound desc = could not find container \"fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9\": container with ID starting with fb1c9f165fcd401bdf55d890f0b719b6f3fc0d3fbfe5c53826690e3c8331c6c9 not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.884720 4858 scope.go:117] "RemoveContainer" containerID="36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.884951 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3\": container with ID starting with 36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3 not found: ID 
does not exist" containerID="36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.884974 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3"} err="failed to get container status \"36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3\": rpc error: code = NotFound desc = could not find container \"36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3\": container with ID starting with 36a7b2a0ea65e21d56b7c443eaf607f7bd40a4bba3cc0be9facea95b0bc4dbe3 not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.884989 4858 scope.go:117] "RemoveContainer" containerID="3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.885177 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d\": container with ID starting with 3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d not found: ID does not exist" containerID="3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.885196 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d"} err="failed to get container status \"3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d\": rpc error: code = NotFound desc = could not find container \"3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d\": container with ID starting with 3043e28013169277b4f716275317e142b2437b93522cd0b4eeac89f19b497d3d not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.885208 4858 scope.go:117] "RemoveContainer" containerID="6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.900336 4858 scope.go:117] "RemoveContainer" containerID="475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.914348 4858 scope.go:117] "RemoveContainer" containerID="2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.955477 4858 scope.go:117] "RemoveContainer" containerID="6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.956126 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db\": container with ID starting with 6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db not found: ID does not exist" containerID="6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.956157 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db"} err="failed to get container status \"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db\": rpc error: code = NotFound desc = could not find container 
\"6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db\": container with ID starting with 6af24cac8c951ef97ad045b5cc855ca96c92c0c30ccf99d6091644fae82379db not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.956180 4858 scope.go:117] "RemoveContainer" containerID="475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.956471 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25\": container with ID starting with 475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25 not found: ID does not exist" containerID="475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.956498 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25"} err="failed to get container status \"475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25\": rpc error: code = NotFound desc = could not find container \"475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25\": container with ID starting with 475513834eea46f724496dba17d8a73c3d928e366d7bb2bdc4f859a6b38a4b25 not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.956521 4858 scope.go:117] "RemoveContainer" containerID="2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71" Nov 22 08:33:35 crc kubenswrapper[4858]: E1122 08:33:35.956812 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71\": container with ID starting with 2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71 not found: ID does not exist" containerID="2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71" Nov 22 08:33:35 crc kubenswrapper[4858]: I1122 08:33:35.956834 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71"} err="failed to get container status \"2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71\": rpc error: code = NotFound desc = could not find container \"2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71\": container with ID starting with 2799c8d1c439c52c5c295aec61cae8df65c32dac8aacc21685d8c9f68ef78e71 not found: ID does not exist" Nov 22 08:33:37 crc kubenswrapper[4858]: I1122 08:33:37.544757 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" path="/var/lib/kubelet/pods/06e5860b-90fe-48fb-b3a7-ffc98ca1ca85/volumes" Nov 22 08:33:37 crc kubenswrapper[4858]: I1122 08:33:37.546011 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" path="/var/lib/kubelet/pods/7b3e5c66-5951-48be-90d9-b100a0921c3e/volumes" Nov 22 08:35:15 crc kubenswrapper[4858]: I1122 08:35:15.312055 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:35:15 crc 
kubenswrapper[4858]: I1122 08:35:15.312953 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:35:45 crc kubenswrapper[4858]: I1122 08:35:45.312310 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:35:45 crc kubenswrapper[4858]: I1122 08:35:45.312819 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:36:15 crc kubenswrapper[4858]: I1122 08:36:15.312440 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:36:15 crc kubenswrapper[4858]: I1122 08:36:15.312995 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:36:15 crc kubenswrapper[4858]: I1122 08:36:15.313053 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:36:15 crc kubenswrapper[4858]: I1122 08:36:15.313747 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:36:15 crc kubenswrapper[4858]: I1122 08:36:15.313801 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" gracePeriod=600 Nov 22 08:36:15 crc kubenswrapper[4858]: E1122 08:36:15.944863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:36:16 crc kubenswrapper[4858]: I1122 08:36:16.033115 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" exitCode=0 Nov 22 08:36:16 crc kubenswrapper[4858]: I1122 08:36:16.033268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148"} Nov 22 08:36:16 crc kubenswrapper[4858]: I1122 08:36:16.033515 4858 scope.go:117] "RemoveContainer" containerID="a2cceb6e469d2b3bed09fa4ebb4d9670f804c3be0e924f74415dd1c1b606909c" Nov 22 08:36:16 crc kubenswrapper[4858]: I1122 08:36:16.034218 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:36:16 crc kubenswrapper[4858]: E1122 08:36:16.034526 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:36:28 crc kubenswrapper[4858]: I1122 08:36:28.535726 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:36:28 crc kubenswrapper[4858]: E1122 08:36:28.536225 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:36:43 crc kubenswrapper[4858]: I1122 08:36:43.535944 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:36:43 crc kubenswrapper[4858]: E1122 08:36:43.536792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:36:55 crc kubenswrapper[4858]: I1122 08:36:55.535804 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:36:55 crc kubenswrapper[4858]: E1122 08:36:55.536578 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:37:06 crc kubenswrapper[4858]: I1122 08:37:06.535398 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:37:06 crc 
kubenswrapper[4858]: E1122 08:37:06.536176 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.539725 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.540554 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.742910 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743560 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="extract-content" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743581 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="extract-content" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743602 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="extract-utilities" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743611 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="extract-utilities" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743638 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743667 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="extract-content" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743675 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="extract-content" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743686 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="extract-utilities" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="extract-utilities" Nov 22 08:37:19 crc kubenswrapper[4858]: E1122 08:37:19.743707 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: 
I1122 08:37:19.743715 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e5860b-90fe-48fb-b3a7-ffc98ca1ca85" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.743914 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3e5c66-5951-48be-90d9-b100a0921c3e" containerName="registry-server" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.745204 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.769513 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.806503 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.806657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.806706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.908909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.908998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.909029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.909601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.910156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:19 crc kubenswrapper[4858]: I1122 08:37:19.929041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs\") pod \"redhat-operators-nphx5\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:20 crc kubenswrapper[4858]: I1122 08:37:20.066958 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:22 crc kubenswrapper[4858]: I1122 08:37:20.509865 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:22 crc kubenswrapper[4858]: I1122 08:37:21.498170 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerID="756b945d5954ffdd1921ad2ea0f69cc44509658727f3a46529cac5b41bbeb583" exitCode=0 Nov 22 08:37:22 crc kubenswrapper[4858]: I1122 08:37:21.498258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerDied","Data":"756b945d5954ffdd1921ad2ea0f69cc44509658727f3a46529cac5b41bbeb583"} Nov 22 08:37:22 crc kubenswrapper[4858]: I1122 08:37:21.498523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerStarted","Data":"181316fbbed5140589546407c3d036d6012d13c8f3a6b9a3a052b584d43d8f43"} Nov 22 08:37:22 crc kubenswrapper[4858]: I1122 08:37:21.500089 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:37:23 crc kubenswrapper[4858]: I1122 08:37:23.514840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerStarted","Data":"ffde5cc15a686848bcbfea85a8e2da9aba28dfe3e75ddbbb94c338919d5cb0ff"} Nov 22 08:37:24 crc kubenswrapper[4858]: I1122 08:37:24.539305 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerID="ffde5cc15a686848bcbfea85a8e2da9aba28dfe3e75ddbbb94c338919d5cb0ff" exitCode=0 Nov 22 08:37:24 crc kubenswrapper[4858]: I1122 08:37:24.539407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerDied","Data":"ffde5cc15a686848bcbfea85a8e2da9aba28dfe3e75ddbbb94c338919d5cb0ff"} Nov 22 08:37:25 crc kubenswrapper[4858]: I1122 08:37:25.550636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" 
event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerStarted","Data":"411403ed76d64d29c3379e440f63eeddcd775b7c0e51259acb5dfcbe753163cc"} Nov 22 08:37:25 crc kubenswrapper[4858]: I1122 08:37:25.575115 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nphx5" podStartSLOduration=3.111282465 podStartE2EDuration="6.575088325s" podCreationTimestamp="2025-11-22 08:37:19 +0000 UTC" firstStartedPulling="2025-11-22 08:37:21.499805758 +0000 UTC m=+5203.341228764" lastFinishedPulling="2025-11-22 08:37:24.963611628 +0000 UTC m=+5206.805034624" observedRunningTime="2025-11-22 08:37:25.573981169 +0000 UTC m=+5207.415404175" watchObservedRunningTime="2025-11-22 08:37:25.575088325 +0000 UTC m=+5207.416511331" Nov 22 08:37:30 crc kubenswrapper[4858]: I1122 08:37:30.067716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:30 crc kubenswrapper[4858]: I1122 08:37:30.068337 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:31 crc kubenswrapper[4858]: I1122 08:37:31.112593 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nphx5" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="registry-server" probeResult="failure" output=< Nov 22 08:37:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:37:31 crc kubenswrapper[4858]: > Nov 22 08:37:34 crc kubenswrapper[4858]: I1122 08:37:34.535411 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:37:34 crc kubenswrapper[4858]: E1122 08:37:34.535988 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:37:40 crc kubenswrapper[4858]: I1122 08:37:40.114005 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:40 crc kubenswrapper[4858]: I1122 08:37:40.156673 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:40 crc kubenswrapper[4858]: I1122 08:37:40.358883 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:41 crc kubenswrapper[4858]: I1122 08:37:41.664483 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nphx5" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="registry-server" containerID="cri-o://411403ed76d64d29c3379e440f63eeddcd775b7c0e51259acb5dfcbe753163cc" gracePeriod=2 Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.677711 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerID="411403ed76d64d29c3379e440f63eeddcd775b7c0e51259acb5dfcbe753163cc" exitCode=0 Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.677765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerDied","Data":"411403ed76d64d29c3379e440f63eeddcd775b7c0e51259acb5dfcbe753163cc"} Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.678178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nphx5" event={"ID":"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd","Type":"ContainerDied","Data":"181316fbbed5140589546407c3d036d6012d13c8f3a6b9a3a052b584d43d8f43"} Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.678197 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181316fbbed5140589546407c3d036d6012d13c8f3a6b9a3a052b584d43d8f43" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.719556 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.855722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities\") pod \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.855821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content\") pod \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.855905 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs\") pod \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\" (UID: \"0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd\") " Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.857154 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities" (OuterVolumeSpecName: "utilities") pod "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" (UID: "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.861470 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs" (OuterVolumeSpecName: "kube-api-access-297cs") pod "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" (UID: "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd"). InnerVolumeSpecName "kube-api-access-297cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.957431 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-kube-api-access-297cs\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.957744 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:42 crc kubenswrapper[4858]: I1122 08:37:42.958191 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" (UID: "0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:37:43 crc kubenswrapper[4858]: I1122 08:37:43.059018 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:43 crc kubenswrapper[4858]: I1122 08:37:43.684690 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nphx5" Nov 22 08:37:43 crc kubenswrapper[4858]: I1122 08:37:43.710823 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:43 crc kubenswrapper[4858]: I1122 08:37:43.715782 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nphx5"] Nov 22 08:37:45 crc kubenswrapper[4858]: I1122 08:37:45.546072 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" path="/var/lib/kubelet/pods/0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd/volumes" Nov 22 08:37:48 crc kubenswrapper[4858]: I1122 08:37:48.536111 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:37:48 crc kubenswrapper[4858]: E1122 08:37:48.536377 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:38:01 crc kubenswrapper[4858]: I1122 08:38:01.536291 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:38:01 crc kubenswrapper[4858]: E1122 08:38:01.537187 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:38:14 crc kubenswrapper[4858]: I1122 08:38:14.536114 4858 scope.go:117] "RemoveContainer" 
containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:38:14 crc kubenswrapper[4858]: E1122 08:38:14.537063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:38:28 crc kubenswrapper[4858]: I1122 08:38:28.535290 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:38:28 crc kubenswrapper[4858]: E1122 08:38:28.536053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:38:43 crc kubenswrapper[4858]: I1122 08:38:43.536355 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:38:43 crc kubenswrapper[4858]: E1122 08:38:43.536975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:38:56 crc kubenswrapper[4858]: I1122 08:38:56.536061 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:38:56 crc kubenswrapper[4858]: E1122 08:38:56.536973 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:39:10 crc kubenswrapper[4858]: I1122 08:39:10.536152 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:39:10 crc kubenswrapper[4858]: E1122 08:39:10.537469 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:39:22 crc kubenswrapper[4858]: I1122 08:39:22.536473 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:39:22 crc kubenswrapper[4858]: E1122 08:39:22.537267 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:39:37 crc kubenswrapper[4858]: I1122 08:39:37.535892 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:39:37 crc kubenswrapper[4858]: E1122 08:39:37.536700 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:39:48 crc kubenswrapper[4858]: I1122 08:39:48.535982 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:39:48 crc kubenswrapper[4858]: E1122 08:39:48.537156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:40:03 crc kubenswrapper[4858]: I1122 08:40:03.536018 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:40:03 crc kubenswrapper[4858]: E1122 08:40:03.536766 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:40:14 crc kubenswrapper[4858]: I1122 08:40:14.536403 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:40:14 crc kubenswrapper[4858]: E1122 08:40:14.537414 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:40:25 crc kubenswrapper[4858]: I1122 08:40:25.535486 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:40:25 crc kubenswrapper[4858]: E1122 08:40:25.536256 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:40:37 crc kubenswrapper[4858]: I1122 08:40:37.537430 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:40:37 crc kubenswrapper[4858]: E1122 08:40:37.538026 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:40:49 crc kubenswrapper[4858]: I1122 08:40:49.541220 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:40:49 crc kubenswrapper[4858]: E1122 08:40:49.543302 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:41:04 crc kubenswrapper[4858]: I1122 08:41:04.536163 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:41:04 crc kubenswrapper[4858]: E1122 08:41:04.537978 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:41:15 crc kubenswrapper[4858]: I1122 08:41:15.536606 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:41:16 crc kubenswrapper[4858]: I1122 08:41:16.284393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b"} Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.705656 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:05 crc kubenswrapper[4858]: E1122 08:42:05.706543 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="extract-content" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.706562 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="extract-content" Nov 22 08:42:05 crc kubenswrapper[4858]: E1122 08:42:05.706582 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="registry-server" Nov 22 08:42:05 crc 
kubenswrapper[4858]: I1122 08:42:05.706588 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="registry-server" Nov 22 08:42:05 crc kubenswrapper[4858]: E1122 08:42:05.706606 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="extract-utilities" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.706614 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="extract-utilities" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.706777 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bad6f4b-51e0-4cba-a9ca-5efd9c3558cd" containerName="registry-server" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.707879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.715921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ldf\" (UniqueName: \"kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.716496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.716646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.722154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.817678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.817753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.817796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ldf\" (UniqueName: \"kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " 
pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.818920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.819068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:05 crc kubenswrapper[4858]: I1122 08:42:05.839436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ldf\" (UniqueName: \"kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf\") pod \"certified-operators-86rx9\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:06 crc kubenswrapper[4858]: I1122 08:42:06.028509 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:06 crc kubenswrapper[4858]: I1122 08:42:06.594587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:06 crc kubenswrapper[4858]: I1122 08:42:06.654577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerStarted","Data":"dbccbfbb48c23b8d136aea8f814e6d4af905559a67da08506005a998b9e21780"} Nov 22 08:42:07 crc kubenswrapper[4858]: I1122 08:42:07.669702 4858 generic.go:334] "Generic (PLEG): container finished" podID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerID="535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b" exitCode=0 Nov 22 08:42:07 crc kubenswrapper[4858]: I1122 08:42:07.669760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerDied","Data":"535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b"} Nov 22 08:42:09 crc kubenswrapper[4858]: I1122 08:42:09.690185 4858 generic.go:334] "Generic (PLEG): container finished" podID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerID="30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33" exitCode=0 Nov 22 08:42:09 crc kubenswrapper[4858]: I1122 08:42:09.690256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerDied","Data":"30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33"} Nov 22 08:42:10 crc kubenswrapper[4858]: I1122 08:42:10.699471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerStarted","Data":"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f"} Nov 22 08:42:10 crc kubenswrapper[4858]: I1122 08:42:10.722651 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-86rx9" podStartSLOduration=3.29661871 podStartE2EDuration="5.722622422s" podCreationTimestamp="2025-11-22 08:42:05 +0000 UTC" firstStartedPulling="2025-11-22 08:42:07.672765916 +0000 UTC m=+5489.514188932" lastFinishedPulling="2025-11-22 08:42:10.098769638 +0000 UTC m=+5491.940192644" observedRunningTime="2025-11-22 08:42:10.718866552 +0000 UTC m=+5492.560289558" watchObservedRunningTime="2025-11-22 08:42:10.722622422 +0000 UTC m=+5492.564045458" Nov 22 08:42:16 crc kubenswrapper[4858]: I1122 08:42:16.029686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:16 crc kubenswrapper[4858]: I1122 08:42:16.030011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:16 crc kubenswrapper[4858]: I1122 08:42:16.069894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:16 crc kubenswrapper[4858]: I1122 08:42:16.789776 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:16 crc kubenswrapper[4858]: I1122 08:42:16.837656 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:18 crc kubenswrapper[4858]: I1122 08:42:18.754305 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86rx9" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="registry-server" containerID="cri-o://feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f" gracePeriod=2 Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.120107 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.218787 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities\") pod \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.218877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ldf\" (UniqueName: \"kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf\") pod \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.218978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content\") pod \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\" (UID: \"43a2a97d-d918-4e7f-9676-ee9c3137cc1e\") " Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.219867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities" (OuterVolumeSpecName: "utilities") pod "43a2a97d-d918-4e7f-9676-ee9c3137cc1e" (UID: "43a2a97d-d918-4e7f-9676-ee9c3137cc1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.224629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf" (OuterVolumeSpecName: "kube-api-access-22ldf") pod "43a2a97d-d918-4e7f-9676-ee9c3137cc1e" (UID: "43a2a97d-d918-4e7f-9676-ee9c3137cc1e"). InnerVolumeSpecName "kube-api-access-22ldf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.320908 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.320954 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ldf\" (UniqueName: \"kubernetes.io/projected/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-kube-api-access-22ldf\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.406102 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43a2a97d-d918-4e7f-9676-ee9c3137cc1e" (UID: "43a2a97d-d918-4e7f-9676-ee9c3137cc1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.422009 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43a2a97d-d918-4e7f-9676-ee9c3137cc1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.764817 4858 generic.go:334] "Generic (PLEG): container finished" podID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerID="feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f" exitCode=0 Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.764888 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86rx9" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.764918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerDied","Data":"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f"} Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.765254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86rx9" event={"ID":"43a2a97d-d918-4e7f-9676-ee9c3137cc1e","Type":"ContainerDied","Data":"dbccbfbb48c23b8d136aea8f814e6d4af905559a67da08506005a998b9e21780"} Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.765293 4858 scope.go:117] "RemoveContainer" containerID="feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.789381 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.796546 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86rx9"] Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.796825 4858 scope.go:117] "RemoveContainer" containerID="30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.819636 4858 scope.go:117] "RemoveContainer" containerID="535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.847483 4858 scope.go:117] "RemoveContainer" containerID="feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f" Nov 22 08:42:19 crc kubenswrapper[4858]: E1122 08:42:19.847984 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f\": container with ID starting with feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f not found: ID does not exist" containerID="feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.848045 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f"} err="failed to get container status \"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f\": rpc error: code = NotFound desc = could not find container \"feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f\": container with ID starting with feacc8195b48214f1b77fe1ce0ac7d80bac515d5a7afe9f598e41b915cb8e09f not found: ID does not exist" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.848079 4858 scope.go:117] "RemoveContainer" containerID="30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33" Nov 22 08:42:19 crc kubenswrapper[4858]: E1122 08:42:19.848601 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33\": container with ID starting with 30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33 not found: ID does not exist" containerID="30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.848630 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33"} err="failed to get container status \"30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33\": rpc error: code = NotFound desc = could not find container \"30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33\": container with ID starting with 30fedd810fdc43f8fd00bd6a76d107fd5d998ad73b87da0eaeb43380e68ace33 not found: ID does not exist" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.848645 4858 scope.go:117] "RemoveContainer" containerID="535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b" Nov 22 08:42:19 crc kubenswrapper[4858]: E1122 08:42:19.849003 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b\": container with ID starting with 535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b not found: ID does not exist" containerID="535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b" Nov 22 08:42:19 crc kubenswrapper[4858]: I1122 08:42:19.849061 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b"} err="failed to get container status \"535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b\": rpc error: code = NotFound desc = could not find container \"535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b\": container with ID starting with 535472561f18493a0dfcdf8f162d192d2a8563e8e12b8051f448287e57a0721b not found: ID does not exist" Nov 22 08:42:21 crc kubenswrapper[4858]: I1122 08:42:21.548885 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" path="/var/lib/kubelet/pods/43a2a97d-d918-4e7f-9676-ee9c3137cc1e/volumes" Nov 22 08:43:15 crc kubenswrapper[4858]: I1122 08:43:15.312313 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:43:15 crc kubenswrapper[4858]: I1122 08:43:15.312974 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:43:32 crc kubenswrapper[4858]: I1122 08:43:32.714556 4858 scope.go:117] "RemoveContainer" containerID="756b945d5954ffdd1921ad2ea0f69cc44509658727f3a46529cac5b41bbeb583" Nov 22 08:43:32 crc kubenswrapper[4858]: I1122 08:43:32.739170 4858 scope.go:117] "RemoveContainer" containerID="411403ed76d64d29c3379e440f63eeddcd775b7c0e51259acb5dfcbe753163cc" Nov 22 08:43:32 crc kubenswrapper[4858]: I1122 08:43:32.784848 4858 scope.go:117] "RemoveContainer" containerID="ffde5cc15a686848bcbfea85a8e2da9aba28dfe3e75ddbbb94c338919d5cb0ff" Nov 22 08:43:45 crc kubenswrapper[4858]: I1122 08:43:45.312193 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:43:45 crc kubenswrapper[4858]: I1122 08:43:45.313291 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.303404 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:43:51 crc kubenswrapper[4858]: E1122 08:43:51.303998 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="extract-content" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.304012 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="extract-content" Nov 22 08:43:51 crc kubenswrapper[4858]: E1122 08:43:51.304023 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="extract-utilities" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.304031 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="extract-utilities" Nov 22 08:43:51 crc kubenswrapper[4858]: E1122 08:43:51.304052 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="registry-server" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.304059 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="registry-server" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.304213 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a2a97d-d918-4e7f-9676-ee9c3137cc1e" containerName="registry-server" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.305686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.320826 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.399531 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdll\" (UniqueName: \"kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.399968 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.400080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.502800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.502877 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.502984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdll\" (UniqueName: \"kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.503219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.503275 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.526026 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kjdll\" (UniqueName: \"kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll\") pod \"community-operators-kcncr\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:51 crc kubenswrapper[4858]: I1122 08:43:51.633178 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:43:52 crc kubenswrapper[4858]: I1122 08:43:52.187615 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:43:52 crc kubenswrapper[4858]: I1122 08:43:52.501094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerStarted","Data":"22aee43f95b588effa910c219f7d738c7b3310a170bbc8221aaaa99b49966219"} Nov 22 08:43:53 crc kubenswrapper[4858]: I1122 08:43:53.511915 4858 generic.go:334] "Generic (PLEG): container finished" podID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerID="fe6ffd3a785e7d3c06354e956cdc81dc8bea14f1a2fac20edf8d561796f73b08" exitCode=0 Nov 22 08:43:53 crc kubenswrapper[4858]: I1122 08:43:53.512186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerDied","Data":"fe6ffd3a785e7d3c06354e956cdc81dc8bea14f1a2fac20edf8d561796f73b08"} Nov 22 08:43:53 crc kubenswrapper[4858]: I1122 08:43:53.515501 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:43:58 crc kubenswrapper[4858]: I1122 08:43:58.555111 4858 generic.go:334] "Generic (PLEG): container finished" podID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerID="2c4f6d7dcce6b56b9750426b407d8a49516e118d896195b2197379056823fe1a" exitCode=0 Nov 22 08:43:58 crc kubenswrapper[4858]: I1122 08:43:58.555202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerDied","Data":"2c4f6d7dcce6b56b9750426b407d8a49516e118d896195b2197379056823fe1a"} Nov 22 08:44:01 crc kubenswrapper[4858]: I1122 08:44:01.587040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerStarted","Data":"a8f8ee68bb3c4bc242e38c0809e6f4d5b897e53bd64cf33562f18a28982d8d49"} Nov 22 08:44:01 crc kubenswrapper[4858]: I1122 08:44:01.616645 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kcncr" podStartSLOduration=3.744609798 podStartE2EDuration="10.616617514s" podCreationTimestamp="2025-11-22 08:43:51 +0000 UTC" firstStartedPulling="2025-11-22 08:43:53.515148763 +0000 UTC m=+5595.356571769" lastFinishedPulling="2025-11-22 08:44:00.387156479 +0000 UTC m=+5602.228579485" observedRunningTime="2025-11-22 08:44:01.606237853 +0000 UTC m=+5603.447660889" watchObservedRunningTime="2025-11-22 08:44:01.616617514 +0000 UTC m=+5603.458040530" Nov 22 08:44:01 crc kubenswrapper[4858]: I1122 08:44:01.634581 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:01 crc kubenswrapper[4858]: I1122 08:44:01.634647 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:02 crc kubenswrapper[4858]: I1122 08:44:02.689544 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kcncr" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:02 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:02 crc kubenswrapper[4858]: > Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.288544 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.291079 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.302583 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.412686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.412751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6zz\" (UniqueName: \"kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.412778 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.514781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.514890 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg6zz\" (UniqueName: \"kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.514922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.515358 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.515557 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.534792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg6zz\" (UniqueName: \"kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz\") pod \"redhat-marketplace-vxl6r\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:10 crc kubenswrapper[4858]: I1122 08:44:10.614836 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.066742 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:11 crc kubenswrapper[4858]: W1122 08:44:11.071802 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0545c023_0c6b_4725_adc5_d84259b659e8.slice/crio-75be6afe147d7ebda8f816164e7621c6980a31d679e2fdf56e7d9280a3f8dfbc WatchSource:0}: Error finding container 75be6afe147d7ebda8f816164e7621c6980a31d679e2fdf56e7d9280a3f8dfbc: Status 404 returned error can't find the container with id 75be6afe147d7ebda8f816164e7621c6980a31d679e2fdf56e7d9280a3f8dfbc Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.674109 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.675800 4858 generic.go:334] "Generic (PLEG): container finished" podID="0545c023-0c6b-4725-adc5-d84259b659e8" containerID="6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d" exitCode=0 Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.675836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerDied","Data":"6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d"} Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.675858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerStarted","Data":"75be6afe147d7ebda8f816164e7621c6980a31d679e2fdf56e7d9280a3f8dfbc"} Nov 22 08:44:11 crc kubenswrapper[4858]: I1122 08:44:11.725784 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.065912 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.067017 4858 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-kcncr" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="registry-server" containerID="cri-o://a8f8ee68bb3c4bc242e38c0809e6f4d5b897e53bd64cf33562f18a28982d8d49" gracePeriod=2 Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.700428 4858 generic.go:334] "Generic (PLEG): container finished" podID="0545c023-0c6b-4725-adc5-d84259b659e8" containerID="d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8" exitCode=0 Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.700501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerDied","Data":"d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8"} Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.704198 4858 generic.go:334] "Generic (PLEG): container finished" podID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerID="a8f8ee68bb3c4bc242e38c0809e6f4d5b897e53bd64cf33562f18a28982d8d49" exitCode=0 Nov 22 08:44:14 crc kubenswrapper[4858]: I1122 08:44:14.704283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerDied","Data":"a8f8ee68bb3c4bc242e38c0809e6f4d5b897e53bd64cf33562f18a28982d8d49"} Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.117745 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.288853 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content\") pod \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.289026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities\") pod \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.289190 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdll\" (UniqueName: \"kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll\") pod \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\" (UID: \"9af6742e-32c6-4228-a1e9-20cc7f7c0fee\") " Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.291220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities" (OuterVolumeSpecName: "utilities") pod "9af6742e-32c6-4228-a1e9-20cc7f7c0fee" (UID: "9af6742e-32c6-4228-a1e9-20cc7f7c0fee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.298745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll" (OuterVolumeSpecName: "kube-api-access-kjdll") pod "9af6742e-32c6-4228-a1e9-20cc7f7c0fee" (UID: "9af6742e-32c6-4228-a1e9-20cc7f7c0fee"). InnerVolumeSpecName "kube-api-access-kjdll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.312305 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.312395 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.312465 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.313556 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.313625 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b" gracePeriod=600 Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.360740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9af6742e-32c6-4228-a1e9-20cc7f7c0fee" (UID: "9af6742e-32c6-4228-a1e9-20cc7f7c0fee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.391384 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjdll\" (UniqueName: \"kubernetes.io/projected/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-kube-api-access-kjdll\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.391431 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.391441 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af6742e-32c6-4228-a1e9-20cc7f7c0fee-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.713950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcncr" event={"ID":"9af6742e-32c6-4228-a1e9-20cc7f7c0fee","Type":"ContainerDied","Data":"22aee43f95b588effa910c219f7d738c7b3310a170bbc8221aaaa99b49966219"} Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.714040 4858 scope.go:117] "RemoveContainer" containerID="a8f8ee68bb3c4bc242e38c0809e6f4d5b897e53bd64cf33562f18a28982d8d49" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.713966 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kcncr" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.719547 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b" exitCode=0 Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.719586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b"} Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.736391 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.742237 4858 scope.go:117] "RemoveContainer" containerID="2c4f6d7dcce6b56b9750426b407d8a49516e118d896195b2197379056823fe1a" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.742995 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kcncr"] Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.764116 4858 scope.go:117] "RemoveContainer" containerID="fe6ffd3a785e7d3c06354e956cdc81dc8bea14f1a2fac20edf8d561796f73b08" Nov 22 08:44:15 crc kubenswrapper[4858]: I1122 08:44:15.786072 4858 scope.go:117] "RemoveContainer" containerID="8d7c8e5085db7df364f7c89e5ccdac7039b5f893026b73b63f346e2f9948e148" Nov 22 08:44:17 crc kubenswrapper[4858]: I1122 08:44:17.552191 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" path="/var/lib/kubelet/pods/9af6742e-32c6-4228-a1e9-20cc7f7c0fee/volumes" Nov 22 08:44:17 crc kubenswrapper[4858]: I1122 08:44:17.747102 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19"} Nov 22 08:44:18 crc kubenswrapper[4858]: I1122 08:44:18.759491 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerStarted","Data":"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2"} Nov 22 08:44:18 crc kubenswrapper[4858]: I1122 08:44:18.780498 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vxl6r" podStartSLOduration=2.94711238 podStartE2EDuration="8.780476121s" podCreationTimestamp="2025-11-22 08:44:10 +0000 UTC" firstStartedPulling="2025-11-22 08:44:11.678196157 +0000 UTC m=+5613.519619153" lastFinishedPulling="2025-11-22 08:44:17.511559888 +0000 UTC m=+5619.352982894" observedRunningTime="2025-11-22 08:44:18.776440732 +0000 UTC m=+5620.617863748" watchObservedRunningTime="2025-11-22 08:44:18.780476121 +0000 UTC m=+5620.621899127" Nov 22 08:44:20 crc kubenswrapper[4858]: I1122 08:44:20.615740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:20 crc kubenswrapper[4858]: I1122 08:44:20.615877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:20 crc kubenswrapper[4858]: I1122 08:44:20.666097 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:30 crc kubenswrapper[4858]: I1122 08:44:30.662198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:30 crc kubenswrapper[4858]: I1122 08:44:30.727494 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:30 crc kubenswrapper[4858]: I1122 08:44:30.876466 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxl6r" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="registry-server" containerID="cri-o://27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2" gracePeriod=2 Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.328502 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.465104 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg6zz\" (UniqueName: \"kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz\") pod \"0545c023-0c6b-4725-adc5-d84259b659e8\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.465172 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities\") pod \"0545c023-0c6b-4725-adc5-d84259b659e8\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.465236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content\") pod \"0545c023-0c6b-4725-adc5-d84259b659e8\" (UID: \"0545c023-0c6b-4725-adc5-d84259b659e8\") " Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.466939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities" (OuterVolumeSpecName: "utilities") pod "0545c023-0c6b-4725-adc5-d84259b659e8" (UID: "0545c023-0c6b-4725-adc5-d84259b659e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.475105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz" (OuterVolumeSpecName: "kube-api-access-pg6zz") pod "0545c023-0c6b-4725-adc5-d84259b659e8" (UID: "0545c023-0c6b-4725-adc5-d84259b659e8"). InnerVolumeSpecName "kube-api-access-pg6zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.488279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0545c023-0c6b-4725-adc5-d84259b659e8" (UID: "0545c023-0c6b-4725-adc5-d84259b659e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.567962 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg6zz\" (UniqueName: \"kubernetes.io/projected/0545c023-0c6b-4725-adc5-d84259b659e8-kube-api-access-pg6zz\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.568008 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.568019 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0545c023-0c6b-4725-adc5-d84259b659e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.889634 4858 generic.go:334] "Generic (PLEG): container finished" podID="0545c023-0c6b-4725-adc5-d84259b659e8" containerID="27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2" exitCode=0 Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.889722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerDied","Data":"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2"} Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.889772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxl6r" event={"ID":"0545c023-0c6b-4725-adc5-d84259b659e8","Type":"ContainerDied","Data":"75be6afe147d7ebda8f816164e7621c6980a31d679e2fdf56e7d9280a3f8dfbc"} Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.889764 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxl6r" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.889918 4858 scope.go:117] "RemoveContainer" containerID="27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.918858 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.922076 4858 scope.go:117] "RemoveContainer" containerID="d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.925450 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxl6r"] Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.944773 4858 scope.go:117] "RemoveContainer" containerID="6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.979339 4858 scope.go:117] "RemoveContainer" containerID="27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2" Nov 22 08:44:31 crc kubenswrapper[4858]: E1122 08:44:31.979975 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2\": container with ID starting with 27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2 not found: ID does not exist" containerID="27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.980024 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2"} err="failed to get container status \"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2\": rpc error: code = NotFound desc = could not find container \"27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2\": container with ID starting with 27d9ee37edfe8016555e34c867ff8992c6e65cb9aec0075468f07726b39972f2 not found: ID does not exist" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.980057 4858 scope.go:117] "RemoveContainer" containerID="d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8" Nov 22 08:44:31 crc kubenswrapper[4858]: E1122 08:44:31.980402 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8\": container with ID starting with d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8 not found: ID does not exist" containerID="d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.980437 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8"} err="failed to get container status \"d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8\": rpc error: code = NotFound desc = could not find container \"d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8\": container with ID starting with d0ec279433e30d7841468a4d6cde42ee90314c74ecf102b6fc19a2153e9fc9f8 not found: ID does not exist" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.980459 4858 scope.go:117] "RemoveContainer" 
containerID="6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d" Nov 22 08:44:31 crc kubenswrapper[4858]: E1122 08:44:31.980892 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d\": container with ID starting with 6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d not found: ID does not exist" containerID="6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d" Nov 22 08:44:31 crc kubenswrapper[4858]: I1122 08:44:31.980952 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d"} err="failed to get container status \"6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d\": rpc error: code = NotFound desc = could not find container \"6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d\": container with ID starting with 6cddd798b4be0c874f1e1363f40b44e8776dc3ca859a231a57db3c049696b96d not found: ID does not exist" Nov 22 08:44:33 crc kubenswrapper[4858]: I1122 08:44:33.547093 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" path="/var/lib/kubelet/pods/0545c023-0c6b-4725-adc5-d84259b659e8/volumes" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.141121 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm"] Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.141982 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="extract-content" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.141996 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="extract-content" Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.142008 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142014 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.142033 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="extract-utilities" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142040 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="extract-utilities" Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.142049 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="extract-content" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="extract-content" Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.142069 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142075 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" 
containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: E1122 08:45:00.142089 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="extract-utilities" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142095 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="extract-utilities" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142250 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af6742e-32c6-4228-a1e9-20cc7f7c0fee" containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142271 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0545c023-0c6b-4725-adc5-d84259b659e8" containerName="registry-server" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.142857 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.146503 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.146715 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.154650 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm"] Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.235649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.235743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.235798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vclws\" (UniqueName: \"kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.337167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.337238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.337280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vclws\" (UniqueName: \"kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.338597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.344026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.354169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vclws\" (UniqueName: \"kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws\") pod \"collect-profiles-29396685-kqzcm\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.466527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:00 crc kubenswrapper[4858]: I1122 08:45:00.888950 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm"] Nov 22 08:45:01 crc kubenswrapper[4858]: I1122 08:45:01.132386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" event={"ID":"08dd9777-beef-4e69-89b5-19901541212d","Type":"ContainerStarted","Data":"0722cf0db56da9248a7efbb8ec59d4b4865f18457c65a54683e213ed63c14f3c"} Nov 22 08:45:02 crc kubenswrapper[4858]: I1122 08:45:02.142221 4858 generic.go:334] "Generic (PLEG): container finished" podID="08dd9777-beef-4e69-89b5-19901541212d" containerID="fd09b5dfdb5a00437659660bf9644fad68aad6b686f05e06d3259d27a3397a6e" exitCode=0 Nov 22 08:45:02 crc kubenswrapper[4858]: I1122 08:45:02.142340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" event={"ID":"08dd9777-beef-4e69-89b5-19901541212d","Type":"ContainerDied","Data":"fd09b5dfdb5a00437659660bf9644fad68aad6b686f05e06d3259d27a3397a6e"} Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.385553 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.479231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vclws\" (UniqueName: \"kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws\") pod \"08dd9777-beef-4e69-89b5-19901541212d\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.479311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume\") pod \"08dd9777-beef-4e69-89b5-19901541212d\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.479385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume\") pod \"08dd9777-beef-4e69-89b5-19901541212d\" (UID: \"08dd9777-beef-4e69-89b5-19901541212d\") " Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.480169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume" (OuterVolumeSpecName: "config-volume") pod "08dd9777-beef-4e69-89b5-19901541212d" (UID: "08dd9777-beef-4e69-89b5-19901541212d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.484332 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws" (OuterVolumeSpecName: "kube-api-access-vclws") pod "08dd9777-beef-4e69-89b5-19901541212d" (UID: "08dd9777-beef-4e69-89b5-19901541212d"). InnerVolumeSpecName "kube-api-access-vclws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.484312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08dd9777-beef-4e69-89b5-19901541212d" (UID: "08dd9777-beef-4e69-89b5-19901541212d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.581009 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08dd9777-beef-4e69-89b5-19901541212d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.581066 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vclws\" (UniqueName: \"kubernetes.io/projected/08dd9777-beef-4e69-89b5-19901541212d-kube-api-access-vclws\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:03 crc kubenswrapper[4858]: I1122 08:45:03.581085 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08dd9777-beef-4e69-89b5-19901541212d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4858]: I1122 08:45:04.156568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" event={"ID":"08dd9777-beef-4e69-89b5-19901541212d","Type":"ContainerDied","Data":"0722cf0db56da9248a7efbb8ec59d4b4865f18457c65a54683e213ed63c14f3c"} Nov 22 08:45:04 crc kubenswrapper[4858]: I1122 08:45:04.156661 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0722cf0db56da9248a7efbb8ec59d4b4865f18457c65a54683e213ed63c14f3c" Nov 22 08:45:04 crc kubenswrapper[4858]: I1122 08:45:04.156665 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm" Nov 22 08:45:04 crc kubenswrapper[4858]: I1122 08:45:04.449534 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt"] Nov 22 08:45:04 crc kubenswrapper[4858]: I1122 08:45:04.455149 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-jhsgt"] Nov 22 08:45:05 crc kubenswrapper[4858]: I1122 08:45:05.546626 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c6519f-81ff-402d-abb4-1dd51ba8a85c" path="/var/lib/kubelet/pods/43c6519f-81ff-402d-abb4-1dd51ba8a85c/volumes" Nov 22 08:45:32 crc kubenswrapper[4858]: I1122 08:45:32.869028 4858 scope.go:117] "RemoveContainer" containerID="8a895b353a5e3fd683763f205a08e337dc3cf9576ba69cd1ee05d9566036363d" Nov 22 08:46:45 crc kubenswrapper[4858]: I1122 08:46:45.311685 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:46:45 crc kubenswrapper[4858]: I1122 08:46:45.312166 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:47:15 crc kubenswrapper[4858]: I1122 08:47:15.311923 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 22 08:47:15 crc kubenswrapper[4858]: I1122 08:47:15.312896 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.312496 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.313387 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.313446 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.314514 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.314578 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" gracePeriod=600 Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.687251 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" exitCode=0 Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.687352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19"} Nov 22 08:47:45 crc kubenswrapper[4858]: I1122 08:47:45.687709 4858 scope.go:117] "RemoveContainer" containerID="c73d81d31b02a873c292f43dd4615dce7a42f77e1385a8b79c1bac25b88f895b" Nov 22 08:47:46 crc kubenswrapper[4858]: E1122 08:47:46.050409 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:47:46 crc kubenswrapper[4858]: I1122 08:47:46.698877 4858 scope.go:117] "RemoveContainer" 
containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:47:46 crc kubenswrapper[4858]: E1122 08:47:46.699180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:47:59 crc kubenswrapper[4858]: I1122 08:47:59.540417 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:47:59 crc kubenswrapper[4858]: E1122 08:47:59.541588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:48:10 crc kubenswrapper[4858]: I1122 08:48:10.536719 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:48:10 crc kubenswrapper[4858]: E1122 08:48:10.537999 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.344031 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:12 crc kubenswrapper[4858]: E1122 08:48:12.344627 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd9777-beef-4e69-89b5-19901541212d" containerName="collect-profiles" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.344647 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd9777-beef-4e69-89b5-19901541212d" containerName="collect-profiles" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.344882 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dd9777-beef-4e69-89b5-19901541212d" containerName="collect-profiles" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.346818 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.359696 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.457756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.457819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.457869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9sv\" (UniqueName: \"kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.560030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.560110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.560174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr9sv\" (UniqueName: \"kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.560665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.560815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.581948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nr9sv\" (UniqueName: \"kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv\") pod \"redhat-operators-vkb6f\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:12 crc kubenswrapper[4858]: I1122 08:48:12.668784 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:13 crc kubenswrapper[4858]: I1122 08:48:13.009781 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:13 crc kubenswrapper[4858]: I1122 08:48:13.950942 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerID="5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11" exitCode=0 Nov 22 08:48:13 crc kubenswrapper[4858]: I1122 08:48:13.951452 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerDied","Data":"5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11"} Nov 22 08:48:13 crc kubenswrapper[4858]: I1122 08:48:13.951517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerStarted","Data":"ab2bc1c7ec9cfe040f0af1deb39dd1b7c8ee8a8848f6cde782c9db3de0561db8"} Nov 22 08:48:14 crc kubenswrapper[4858]: I1122 08:48:14.962556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerStarted","Data":"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479"} Nov 22 08:48:15 crc kubenswrapper[4858]: I1122 08:48:15.973734 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerID="721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479" exitCode=0 Nov 22 08:48:15 crc kubenswrapper[4858]: I1122 08:48:15.973878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerDied","Data":"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479"} Nov 22 08:48:17 crc kubenswrapper[4858]: I1122 08:48:17.994883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerStarted","Data":"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92"} Nov 22 08:48:18 crc kubenswrapper[4858]: I1122 08:48:18.020339 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkb6f" podStartSLOduration=2.88168756 podStartE2EDuration="6.020287943s" podCreationTimestamp="2025-11-22 08:48:12 +0000 UTC" firstStartedPulling="2025-11-22 08:48:13.956224023 +0000 UTC m=+5855.797647029" lastFinishedPulling="2025-11-22 08:48:17.094824406 +0000 UTC m=+5858.936247412" observedRunningTime="2025-11-22 08:48:18.016646677 +0000 UTC m=+5859.858069703" watchObservedRunningTime="2025-11-22 08:48:18.020287943 +0000 UTC m=+5859.861710949" Nov 22 08:48:21 crc kubenswrapper[4858]: I1122 08:48:21.536387 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 
08:48:21 crc kubenswrapper[4858]: E1122 08:48:21.537383 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:48:22 crc kubenswrapper[4858]: I1122 08:48:22.669752 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:22 crc kubenswrapper[4858]: I1122 08:48:22.670348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:23 crc kubenswrapper[4858]: I1122 08:48:23.716186 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vkb6f" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="registry-server" probeResult="failure" output=< Nov 22 08:48:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:48:23 crc kubenswrapper[4858]: > Nov 22 08:48:32 crc kubenswrapper[4858]: I1122 08:48:32.726084 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:32 crc kubenswrapper[4858]: I1122 08:48:32.779043 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:32 crc kubenswrapper[4858]: I1122 08:48:32.968423 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:34 crc kubenswrapper[4858]: I1122 08:48:34.127366 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vkb6f" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="registry-server" containerID="cri-o://fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92" gracePeriod=2 Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.096977 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.139698 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerID="fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92" exitCode=0 Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.140601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerDied","Data":"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92"} Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.141237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkb6f" event={"ID":"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86","Type":"ContainerDied","Data":"ab2bc1c7ec9cfe040f0af1deb39dd1b7c8ee8a8848f6cde782c9db3de0561db8"} Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.141409 4858 scope.go:117] "RemoveContainer" containerID="fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.140635 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkb6f" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.163909 4858 scope.go:117] "RemoveContainer" containerID="721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.184414 4858 scope.go:117] "RemoveContainer" containerID="5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.223940 4858 scope.go:117] "RemoveContainer" containerID="fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92" Nov 22 08:48:35 crc kubenswrapper[4858]: E1122 08:48:35.224489 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92\": container with ID starting with fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92 not found: ID does not exist" containerID="fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.224578 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92"} err="failed to get container status \"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92\": rpc error: code = NotFound desc = could not find container \"fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92\": container with ID starting with fbce3defd2400166189408446372c44b3db27f3ec84f6bb9ccdb71f1a99d6b92 not found: ID does not exist" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.224624 4858 scope.go:117] "RemoveContainer" containerID="721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479" Nov 22 08:48:35 crc kubenswrapper[4858]: E1122 08:48:35.225271 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479\": container with ID starting with 721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479 not found: ID does not exist" 
containerID="721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.225302 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479"} err="failed to get container status \"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479\": rpc error: code = NotFound desc = could not find container \"721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479\": container with ID starting with 721ec6cc493298800c3437245043c723f4a9df83cc12d49d62c89bf4d5816479 not found: ID does not exist" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.225333 4858 scope.go:117] "RemoveContainer" containerID="5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11" Nov 22 08:48:35 crc kubenswrapper[4858]: E1122 08:48:35.225665 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11\": container with ID starting with 5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11 not found: ID does not exist" containerID="5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.225685 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11"} err="failed to get container status \"5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11\": rpc error: code = NotFound desc = could not find container \"5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11\": container with ID starting with 5045def7ce5613af7f1bac3040f227fe29843af64a023703dd9182ccf5d01e11 not found: ID does not exist" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.248746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content\") pod \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.248859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities\") pod \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.248987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr9sv\" (UniqueName: \"kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv\") pod \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\" (UID: \"2a0c5b06-b97a-4356-aa35-6d8a0c6aad86\") " Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.249974 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities" (OuterVolumeSpecName: "utilities") pod "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" (UID: "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.255915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv" (OuterVolumeSpecName: "kube-api-access-nr9sv") pod "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" (UID: "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86"). InnerVolumeSpecName "kube-api-access-nr9sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.351593 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.351661 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr9sv\" (UniqueName: \"kubernetes.io/projected/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-kube-api-access-nr9sv\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.360611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" (UID: "2a0c5b06-b97a-4356-aa35-6d8a0c6aad86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.453244 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.484705 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.491256 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vkb6f"] Nov 22 08:48:35 crc kubenswrapper[4858]: I1122 08:48:35.546007 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" path="/var/lib/kubelet/pods/2a0c5b06-b97a-4356-aa35-6d8a0c6aad86/volumes" Nov 22 08:48:36 crc kubenswrapper[4858]: I1122 08:48:36.535564 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:48:36 crc kubenswrapper[4858]: E1122 08:48:36.536673 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:48:47 crc kubenswrapper[4858]: I1122 08:48:47.535238 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:48:47 crc kubenswrapper[4858]: E1122 08:48:47.536097 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:49:00 crc kubenswrapper[4858]: I1122 08:49:00.535509 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:49:00 crc kubenswrapper[4858]: E1122 08:49:00.536365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:49:13 crc kubenswrapper[4858]: I1122 08:49:13.535891 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:49:13 crc kubenswrapper[4858]: E1122 08:49:13.537032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:49:25 crc kubenswrapper[4858]: I1122 08:49:25.536145 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:49:25 crc kubenswrapper[4858]: E1122 08:49:25.537456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:49:39 crc kubenswrapper[4858]: I1122 08:49:39.535945 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:49:39 crc kubenswrapper[4858]: E1122 08:49:39.536603 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:49:50 crc kubenswrapper[4858]: I1122 08:49:50.536385 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:49:50 crc kubenswrapper[4858]: E1122 08:49:50.537167 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:50:03 crc kubenswrapper[4858]: I1122 08:50:03.536728 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:50:03 crc kubenswrapper[4858]: E1122 08:50:03.537805 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:50:18 crc kubenswrapper[4858]: I1122 08:50:18.536743 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:50:18 crc kubenswrapper[4858]: E1122 08:50:18.538013 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:50:30 crc kubenswrapper[4858]: I1122 08:50:30.536668 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:50:30 crc kubenswrapper[4858]: E1122 08:50:30.537789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:50:43 crc kubenswrapper[4858]: I1122 08:50:43.537077 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:50:43 crc kubenswrapper[4858]: E1122 08:50:43.538679 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:50:56 crc kubenswrapper[4858]: I1122 08:50:56.536610 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:50:56 crc kubenswrapper[4858]: E1122 08:50:56.537585 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:51:10 crc kubenswrapper[4858]: I1122 08:51:10.536691 4858 scope.go:117] "RemoveContainer" 
containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:51:10 crc kubenswrapper[4858]: E1122 08:51:10.538715 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:51:24 crc kubenswrapper[4858]: I1122 08:51:24.535620 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:51:24 crc kubenswrapper[4858]: E1122 08:51:24.536709 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:51:37 crc kubenswrapper[4858]: I1122 08:51:37.535599 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:51:37 crc kubenswrapper[4858]: E1122 08:51:37.536488 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:51:51 crc kubenswrapper[4858]: I1122 08:51:51.536116 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:51:51 crc kubenswrapper[4858]: E1122 08:51:51.537203 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:52:03 crc kubenswrapper[4858]: I1122 08:52:03.534978 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:52:03 crc kubenswrapper[4858]: E1122 08:52:03.535589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:52:15 crc kubenswrapper[4858]: I1122 08:52:15.536669 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:52:15 crc kubenswrapper[4858]: E1122 08:52:15.537993 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:52:26 crc kubenswrapper[4858]: I1122 08:52:26.537313 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:52:26 crc kubenswrapper[4858]: E1122 08:52:26.538653 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.009739 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:37 crc kubenswrapper[4858]: E1122 08:52:37.010632 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="registry-server" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.010646 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="registry-server" Nov 22 08:52:37 crc kubenswrapper[4858]: E1122 08:52:37.010673 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="extract-content" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.010680 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="extract-content" Nov 22 08:52:37 crc kubenswrapper[4858]: E1122 08:52:37.010697 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="extract-utilities" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.010703 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="extract-utilities" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.010859 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0c5b06-b97a-4356-aa35-6d8a0c6aad86" containerName="registry-server" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.011961 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.033000 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.120780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.120857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77clv\" (UniqueName: \"kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.121175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.222887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77clv\" (UniqueName: \"kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.222978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.223068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.223656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.223691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.247405 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-77clv\" (UniqueName: \"kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv\") pod \"certified-operators-9vqkc\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.331405 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:37 crc kubenswrapper[4858]: I1122 08:52:37.829494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:38 crc kubenswrapper[4858]: I1122 08:52:38.380666 4858 generic.go:334] "Generic (PLEG): container finished" podID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerID="fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5" exitCode=0 Nov 22 08:52:38 crc kubenswrapper[4858]: I1122 08:52:38.380808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerDied","Data":"fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5"} Nov 22 08:52:38 crc kubenswrapper[4858]: I1122 08:52:38.381061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerStarted","Data":"5af4d46c19877c2b05420b07314e09b73bde5724ab4ea2707d7725ce0f13b970"} Nov 22 08:52:38 crc kubenswrapper[4858]: I1122 08:52:38.383204 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:52:39 crc kubenswrapper[4858]: I1122 08:52:39.540944 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:52:39 crc kubenswrapper[4858]: E1122 08:52:39.541981 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:52:40 crc kubenswrapper[4858]: I1122 08:52:40.406645 4858 generic.go:334] "Generic (PLEG): container finished" podID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerID="dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da" exitCode=0 Nov 22 08:52:40 crc kubenswrapper[4858]: I1122 08:52:40.406710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerDied","Data":"dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da"} Nov 22 08:52:41 crc kubenswrapper[4858]: I1122 08:52:41.418534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerStarted","Data":"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655"} Nov 22 08:52:41 crc kubenswrapper[4858]: I1122 08:52:41.446831 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9vqkc" podStartSLOduration=2.913625048 
podStartE2EDuration="5.446803305s" podCreationTimestamp="2025-11-22 08:52:36 +0000 UTC" firstStartedPulling="2025-11-22 08:52:38.382694155 +0000 UTC m=+6120.224117201" lastFinishedPulling="2025-11-22 08:52:40.915872442 +0000 UTC m=+6122.757295458" observedRunningTime="2025-11-22 08:52:41.437851619 +0000 UTC m=+6123.279274645" watchObservedRunningTime="2025-11-22 08:52:41.446803305 +0000 UTC m=+6123.288226311" Nov 22 08:52:47 crc kubenswrapper[4858]: I1122 08:52:47.332245 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:47 crc kubenswrapper[4858]: I1122 08:52:47.332970 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:47 crc kubenswrapper[4858]: I1122 08:52:47.383295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:47 crc kubenswrapper[4858]: I1122 08:52:47.505127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:47 crc kubenswrapper[4858]: I1122 08:52:47.620161 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.480612 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9vqkc" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="registry-server" containerID="cri-o://0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655" gracePeriod=2 Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.868281 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.935382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities\") pod \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.935523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77clv\" (UniqueName: \"kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv\") pod \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.936523 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities" (OuterVolumeSpecName: "utilities") pod "03dbe12c-ef83-41b2-bc8d-c821a0256a86" (UID: "03dbe12c-ef83-41b2-bc8d-c821a0256a86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.936605 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content\") pod \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\" (UID: \"03dbe12c-ef83-41b2-bc8d-c821a0256a86\") " Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.937848 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.942986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv" (OuterVolumeSpecName: "kube-api-access-77clv") pod "03dbe12c-ef83-41b2-bc8d-c821a0256a86" (UID: "03dbe12c-ef83-41b2-bc8d-c821a0256a86"). InnerVolumeSpecName "kube-api-access-77clv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:52:49 crc kubenswrapper[4858]: I1122 08:52:49.994015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03dbe12c-ef83-41b2-bc8d-c821a0256a86" (UID: "03dbe12c-ef83-41b2-bc8d-c821a0256a86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.039073 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03dbe12c-ef83-41b2-bc8d-c821a0256a86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.039112 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77clv\" (UniqueName: \"kubernetes.io/projected/03dbe12c-ef83-41b2-bc8d-c821a0256a86-kube-api-access-77clv\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.488493 4858 generic.go:334] "Generic (PLEG): container finished" podID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerID="0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655" exitCode=0 Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.488543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerDied","Data":"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655"} Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.488559 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vqkc" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.488581 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vqkc" event={"ID":"03dbe12c-ef83-41b2-bc8d-c821a0256a86","Type":"ContainerDied","Data":"5af4d46c19877c2b05420b07314e09b73bde5724ab4ea2707d7725ce0f13b970"} Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.488606 4858 scope.go:117] "RemoveContainer" containerID="0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.512823 4858 scope.go:117] "RemoveContainer" containerID="dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.520096 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.530385 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9vqkc"] Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.536063 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.558421 4858 scope.go:117] "RemoveContainer" containerID="fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.585290 4858 scope.go:117] "RemoveContainer" containerID="0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655" Nov 22 08:52:50 crc kubenswrapper[4858]: E1122 08:52:50.585936 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655\": container with ID starting with 0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655 not found: ID does not exist" containerID="0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.585981 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655"} err="failed to get container status \"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655\": rpc error: code = NotFound desc = could not find container \"0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655\": container with ID starting with 0d5ccda7970844f2ed886c74ba02f27969e426f8c94f72c75e1e0ed6a48da655 not found: ID does not exist" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.586012 4858 scope.go:117] "RemoveContainer" containerID="dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da" Nov 22 08:52:50 crc kubenswrapper[4858]: E1122 08:52:50.586570 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da\": container with ID starting with dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da not found: ID does not exist" containerID="dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.586610 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da"} err="failed to get container status \"dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da\": rpc error: code = NotFound desc = could not find container \"dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da\": container with ID starting with dbdd75836fca855382762fe0498bb16c55afb8ebdbdc4dc42d0e379d7e82a4da not found: ID does not exist" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.586636 4858 scope.go:117] "RemoveContainer" containerID="fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5" Nov 22 08:52:50 crc kubenswrapper[4858]: E1122 08:52:50.587289 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5\": container with ID starting with fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5 not found: ID does not exist" containerID="fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5" Nov 22 08:52:50 crc kubenswrapper[4858]: I1122 08:52:50.587345 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5"} err="failed to get container status \"fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5\": rpc error: code = NotFound desc = could not find container \"fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5\": container with ID starting with fe4a554ba81c397b405fe423034cc5eb426a3a7e80afee4222875f04dbcddaa5 not found: ID does not exist" Nov 22 08:52:51 crc kubenswrapper[4858]: I1122 08:52:51.501166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2"} Nov 22 08:52:51 crc kubenswrapper[4858]: I1122 08:52:51.547647 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" path="/var/lib/kubelet/pods/03dbe12c-ef83-41b2-bc8d-c821a0256a86/volumes" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.213281 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:21 crc kubenswrapper[4858]: E1122 08:54:21.214710 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="extract-content" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.214735 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="extract-content" Nov 22 08:54:21 crc kubenswrapper[4858]: E1122 08:54:21.214768 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="extract-utilities" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.214779 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="extract-utilities" Nov 22 08:54:21 crc kubenswrapper[4858]: E1122 08:54:21.214790 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="registry-server" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.214798 4858 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="registry-server" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.215021 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="03dbe12c-ef83-41b2-bc8d-c821a0256a86" containerName="registry-server" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.216507 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.231499 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.302592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.302716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.303194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pg8\" (UniqueName: \"kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.406250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.406444 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pg8\" (UniqueName: \"kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.406482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.406902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.406941 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.432287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pg8\" (UniqueName: \"kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8\") pod \"redhat-marketplace-4tj9g\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:21 crc kubenswrapper[4858]: I1122 08:54:21.551855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:22 crc kubenswrapper[4858]: I1122 08:54:22.015518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:22 crc kubenswrapper[4858]: I1122 08:54:22.236600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerStarted","Data":"85845192b988f62504e42bfb165177ce4856d69bfcc9bf2afaf4bea60852cd63"} Nov 22 08:54:23 crc kubenswrapper[4858]: I1122 08:54:23.248567 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerID="41b34967f70a16d04b13a0c6549a12e0064857f261d68dcaa31a2a6867e768ea" exitCode=0 Nov 22 08:54:23 crc kubenswrapper[4858]: I1122 08:54:23.248648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerDied","Data":"41b34967f70a16d04b13a0c6549a12e0064857f261d68dcaa31a2a6867e768ea"} Nov 22 08:54:26 crc kubenswrapper[4858]: I1122 08:54:26.274774 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerID="324675064fd0925d4785151b441621571e6db33f5bdb0c0b289c81180a4f6266" exitCode=0 Nov 22 08:54:26 crc kubenswrapper[4858]: I1122 08:54:26.274827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerDied","Data":"324675064fd0925d4785151b441621571e6db33f5bdb0c0b289c81180a4f6266"} Nov 22 08:54:29 crc kubenswrapper[4858]: I1122 08:54:29.302771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerStarted","Data":"ea532b7698d5e482549e6f67d52b617ace1fe92479fd8c356d85fd02b6e564f6"} Nov 22 08:54:29 crc kubenswrapper[4858]: I1122 08:54:29.327705 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4tj9g" podStartSLOduration=3.169239565 podStartE2EDuration="8.327679731s" podCreationTimestamp="2025-11-22 08:54:21 +0000 UTC" firstStartedPulling="2025-11-22 08:54:23.250525154 +0000 UTC m=+6225.091948200" lastFinishedPulling="2025-11-22 08:54:28.40896535 +0000 UTC m=+6230.250388366" observedRunningTime="2025-11-22 08:54:29.323491446 +0000 UTC m=+6231.164914452" watchObservedRunningTime="2025-11-22 08:54:29.327679731 +0000 UTC m=+6231.169102737" Nov 22 08:54:31 crc kubenswrapper[4858]: I1122 
08:54:31.553963 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:31 crc kubenswrapper[4858]: I1122 08:54:31.554494 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:31 crc kubenswrapper[4858]: I1122 08:54:31.611854 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:41 crc kubenswrapper[4858]: I1122 08:54:41.597656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:41 crc kubenswrapper[4858]: I1122 08:54:41.646545 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:42 crc kubenswrapper[4858]: I1122 08:54:42.406582 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4tj9g" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="registry-server" containerID="cri-o://ea532b7698d5e482549e6f67d52b617ace1fe92479fd8c356d85fd02b6e564f6" gracePeriod=2 Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.420205 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerID="ea532b7698d5e482549e6f67d52b617ace1fe92479fd8c356d85fd02b6e564f6" exitCode=0 Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.420426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerDied","Data":"ea532b7698d5e482549e6f67d52b617ace1fe92479fd8c356d85fd02b6e564f6"} Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.545838 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.556653 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content\") pod \"c6051d40-11e9-440f-80ff-3e7cfdc61302\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.556697 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96pg8\" (UniqueName: \"kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8\") pod \"c6051d40-11e9-440f-80ff-3e7cfdc61302\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.556933 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities\") pod \"c6051d40-11e9-440f-80ff-3e7cfdc61302\" (UID: \"c6051d40-11e9-440f-80ff-3e7cfdc61302\") " Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.558682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities" (OuterVolumeSpecName: "utilities") pod "c6051d40-11e9-440f-80ff-3e7cfdc61302" (UID: "c6051d40-11e9-440f-80ff-3e7cfdc61302"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.577063 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8" (OuterVolumeSpecName: "kube-api-access-96pg8") pod "c6051d40-11e9-440f-80ff-3e7cfdc61302" (UID: "c6051d40-11e9-440f-80ff-3e7cfdc61302"). InnerVolumeSpecName "kube-api-access-96pg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.581377 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6051d40-11e9-440f-80ff-3e7cfdc61302" (UID: "c6051d40-11e9-440f-80ff-3e7cfdc61302"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.659391 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.659435 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96pg8\" (UniqueName: \"kubernetes.io/projected/c6051d40-11e9-440f-80ff-3e7cfdc61302-kube-api-access-96pg8\") on node \"crc\" DevicePath \"\"" Nov 22 08:54:43 crc kubenswrapper[4858]: I1122 08:54:43.659446 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6051d40-11e9-440f-80ff-3e7cfdc61302-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.432667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tj9g" event={"ID":"c6051d40-11e9-440f-80ff-3e7cfdc61302","Type":"ContainerDied","Data":"85845192b988f62504e42bfb165177ce4856d69bfcc9bf2afaf4bea60852cd63"} Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.432751 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tj9g" Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.432761 4858 scope.go:117] "RemoveContainer" containerID="ea532b7698d5e482549e6f67d52b617ace1fe92479fd8c356d85fd02b6e564f6" Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.462395 4858 scope.go:117] "RemoveContainer" containerID="324675064fd0925d4785151b441621571e6db33f5bdb0c0b289c81180a4f6266" Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.476624 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.485933 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tj9g"] Nov 22 08:54:44 crc kubenswrapper[4858]: I1122 08:54:44.510202 4858 scope.go:117] "RemoveContainer" containerID="41b34967f70a16d04b13a0c6549a12e0064857f261d68dcaa31a2a6867e768ea" Nov 22 08:54:45 crc kubenswrapper[4858]: I1122 08:54:45.547333 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" path="/var/lib/kubelet/pods/c6051d40-11e9-440f-80ff-3e7cfdc61302/volumes" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.308208 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:12 crc kubenswrapper[4858]: E1122 08:55:12.309442 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="extract-utilities" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.309459 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="extract-utilities" Nov 22 08:55:12 crc kubenswrapper[4858]: E1122 08:55:12.309489 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="extract-content" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.309497 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="extract-content" Nov 22 08:55:12 crc kubenswrapper[4858]: E1122 08:55:12.309511 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="registry-server" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.309517 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="registry-server" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.309699 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6051d40-11e9-440f-80ff-3e7cfdc61302" containerName="registry-server" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.310882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.310993 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.435021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.435086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5qp\" (UniqueName: \"kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.435109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.536230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl5qp\" (UniqueName: \"kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.536290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.536415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.537001 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.537188 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content\") pod \"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.559400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5qp\" (UniqueName: \"kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp\") pod 
\"community-operators-2z2dr\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:12 crc kubenswrapper[4858]: I1122 08:55:12.628614 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:13 crc kubenswrapper[4858]: I1122 08:55:13.104364 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:14 crc kubenswrapper[4858]: I1122 08:55:14.158373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerStarted","Data":"36a706ff245cd72232a9132d9cd3e5eb5dc0e6d1bed61a8589879a6269247a3a"} Nov 22 08:55:15 crc kubenswrapper[4858]: I1122 08:55:15.163816 4858 generic.go:334] "Generic (PLEG): container finished" podID="04a00140-9e67-4496-ac4d-795e90a900f3" containerID="6ddd17dc6e512eace266352fbb468a162b1088fbeccd8140e911208e896b4096" exitCode=0 Nov 22 08:55:15 crc kubenswrapper[4858]: I1122 08:55:15.163979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerDied","Data":"6ddd17dc6e512eace266352fbb468a162b1088fbeccd8140e911208e896b4096"} Nov 22 08:55:15 crc kubenswrapper[4858]: I1122 08:55:15.311988 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:55:15 crc kubenswrapper[4858]: I1122 08:55:15.312076 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:55:19 crc kubenswrapper[4858]: I1122 08:55:19.203472 4858 generic.go:334] "Generic (PLEG): container finished" podID="04a00140-9e67-4496-ac4d-795e90a900f3" containerID="4e333495f5f050df0ff5c61ab612e3170f5ac552d82ab87a7be9f7d415c92e1f" exitCode=0 Nov 22 08:55:19 crc kubenswrapper[4858]: I1122 08:55:19.204241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerDied","Data":"4e333495f5f050df0ff5c61ab612e3170f5ac552d82ab87a7be9f7d415c92e1f"} Nov 22 08:55:21 crc kubenswrapper[4858]: I1122 08:55:21.222478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerStarted","Data":"9f06b812659f0006f51d3eee22121fdcf87620455f235f64b886cca533e36f1c"} Nov 22 08:55:21 crc kubenswrapper[4858]: I1122 08:55:21.248301 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2z2dr" podStartSLOduration=4.7685879060000005 podStartE2EDuration="10.248281265s" podCreationTimestamp="2025-11-22 08:55:11 +0000 UTC" firstStartedPulling="2025-11-22 08:55:15.16749818 +0000 UTC m=+6277.008921226" lastFinishedPulling="2025-11-22 08:55:20.647191579 +0000 UTC m=+6282.488614585" 
observedRunningTime="2025-11-22 08:55:21.241466166 +0000 UTC m=+6283.082889182" watchObservedRunningTime="2025-11-22 08:55:21.248281265 +0000 UTC m=+6283.089704271" Nov 22 08:55:22 crc kubenswrapper[4858]: I1122 08:55:22.629331 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:22 crc kubenswrapper[4858]: I1122 08:55:22.629672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:23 crc kubenswrapper[4858]: I1122 08:55:23.671847 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2z2dr" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="registry-server" probeResult="failure" output=< Nov 22 08:55:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:55:23 crc kubenswrapper[4858]: > Nov 22 08:55:32 crc kubenswrapper[4858]: I1122 08:55:32.677888 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:32 crc kubenswrapper[4858]: I1122 08:55:32.728598 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:32 crc kubenswrapper[4858]: I1122 08:55:32.919123 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:34 crc kubenswrapper[4858]: I1122 08:55:34.345730 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2z2dr" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="registry-server" containerID="cri-o://9f06b812659f0006f51d3eee22121fdcf87620455f235f64b886cca533e36f1c" gracePeriod=2 Nov 22 08:55:35 crc kubenswrapper[4858]: I1122 08:55:35.360212 4858 generic.go:334] "Generic (PLEG): container finished" podID="04a00140-9e67-4496-ac4d-795e90a900f3" containerID="9f06b812659f0006f51d3eee22121fdcf87620455f235f64b886cca533e36f1c" exitCode=0 Nov 22 08:55:35 crc kubenswrapper[4858]: I1122 08:55:35.360342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerDied","Data":"9f06b812659f0006f51d3eee22121fdcf87620455f235f64b886cca533e36f1c"} Nov 22 08:55:35 crc kubenswrapper[4858]: I1122 08:55:35.915522 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.081913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content\") pod \"04a00140-9e67-4496-ac4d-795e90a900f3\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.081971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities\") pod \"04a00140-9e67-4496-ac4d-795e90a900f3\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.082076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl5qp\" (UniqueName: \"kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp\") pod \"04a00140-9e67-4496-ac4d-795e90a900f3\" (UID: \"04a00140-9e67-4496-ac4d-795e90a900f3\") " Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.083092 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities" (OuterVolumeSpecName: "utilities") pod "04a00140-9e67-4496-ac4d-795e90a900f3" (UID: "04a00140-9e67-4496-ac4d-795e90a900f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.091372 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp" (OuterVolumeSpecName: "kube-api-access-vl5qp") pod "04a00140-9e67-4496-ac4d-795e90a900f3" (UID: "04a00140-9e67-4496-ac4d-795e90a900f3"). InnerVolumeSpecName "kube-api-access-vl5qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.143384 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04a00140-9e67-4496-ac4d-795e90a900f3" (UID: "04a00140-9e67-4496-ac4d-795e90a900f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.186875 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl5qp\" (UniqueName: \"kubernetes.io/projected/04a00140-9e67-4496-ac4d-795e90a900f3-kube-api-access-vl5qp\") on node \"crc\" DevicePath \"\"" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.186957 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.186986 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04a00140-9e67-4496-ac4d-795e90a900f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.375169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2dr" event={"ID":"04a00140-9e67-4496-ac4d-795e90a900f3","Type":"ContainerDied","Data":"36a706ff245cd72232a9132d9cd3e5eb5dc0e6d1bed61a8589879a6269247a3a"} Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.375262 4858 scope.go:117] "RemoveContainer" containerID="9f06b812659f0006f51d3eee22121fdcf87620455f235f64b886cca533e36f1c" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.375429 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z2dr" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.397438 4858 scope.go:117] "RemoveContainer" containerID="4e333495f5f050df0ff5c61ab612e3170f5ac552d82ab87a7be9f7d415c92e1f" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.429819 4858 scope.go:117] "RemoveContainer" containerID="6ddd17dc6e512eace266352fbb468a162b1088fbeccd8140e911208e896b4096" Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.433234 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:36 crc kubenswrapper[4858]: I1122 08:55:36.442197 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2z2dr"] Nov 22 08:55:37 crc kubenswrapper[4858]: I1122 08:55:37.545703 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" path="/var/lib/kubelet/pods/04a00140-9e67-4496-ac4d-795e90a900f3/volumes" Nov 22 08:55:45 crc kubenswrapper[4858]: I1122 08:55:45.312341 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:55:45 crc kubenswrapper[4858]: I1122 08:55:45.312858 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.312586 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.313561 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.313635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.314624 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.314707 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2" gracePeriod=600 Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.807003 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2" exitCode=0 Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.807123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2"} Nov 22 08:56:15 crc kubenswrapper[4858]: I1122 08:56:15.807643 4858 scope.go:117] "RemoveContainer" containerID="85f049099ea37796cf6a75aa260083ddac9383172ed1e0d3cc722a240f75ab19" Nov 22 08:56:16 crc kubenswrapper[4858]: I1122 08:56:16.821512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356"} Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.739674 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:31 crc kubenswrapper[4858]: E1122 08:58:31.740951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="extract-content" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.740969 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="extract-content" Nov 22 08:58:31 crc kubenswrapper[4858]: E1122 08:58:31.740998 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="registry-server" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.741007 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" 
containerName="registry-server" Nov 22 08:58:31 crc kubenswrapper[4858]: E1122 08:58:31.741018 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="extract-utilities" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.741027 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="extract-utilities" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.741197 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a00140-9e67-4496-ac4d-795e90a900f3" containerName="registry-server" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.743214 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.752924 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.843147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.843295 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.843394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs82n\" (UniqueName: \"kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.945811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.945904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs82n\" (UniqueName: \"kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.946024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.946492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.946556 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:31 crc kubenswrapper[4858]: I1122 08:58:31.970798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs82n\" (UniqueName: \"kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n\") pod \"redhat-operators-dgrcw\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:32 crc kubenswrapper[4858]: I1122 08:58:32.066191 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:32 crc kubenswrapper[4858]: I1122 08:58:32.539254 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:33 crc kubenswrapper[4858]: I1122 08:58:33.148949 4858 generic.go:334] "Generic (PLEG): container finished" podID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerID="abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669" exitCode=0 Nov 22 08:58:33 crc kubenswrapper[4858]: I1122 08:58:33.149021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerDied","Data":"abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669"} Nov 22 08:58:33 crc kubenswrapper[4858]: I1122 08:58:33.149069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerStarted","Data":"f2950a750a620147c5cb1423c2a63b4033a45561cf00c4d78c364aa500c99c1c"} Nov 22 08:58:33 crc kubenswrapper[4858]: I1122 08:58:33.150963 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:58:34 crc kubenswrapper[4858]: I1122 08:58:34.162903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerStarted","Data":"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e"} Nov 22 08:58:35 crc kubenswrapper[4858]: I1122 08:58:35.174290 4858 generic.go:334] "Generic (PLEG): container finished" podID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerID="09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e" exitCode=0 Nov 22 08:58:35 crc kubenswrapper[4858]: I1122 08:58:35.174356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerDied","Data":"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e"} Nov 22 08:58:38 crc kubenswrapper[4858]: I1122 08:58:38.199409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" 
event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerStarted","Data":"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c"} Nov 22 08:58:38 crc kubenswrapper[4858]: I1122 08:58:38.228122 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgrcw" podStartSLOduration=3.338991711 podStartE2EDuration="7.228100748s" podCreationTimestamp="2025-11-22 08:58:31 +0000 UTC" firstStartedPulling="2025-11-22 08:58:33.150690774 +0000 UTC m=+6474.992113790" lastFinishedPulling="2025-11-22 08:58:37.039799821 +0000 UTC m=+6478.881222827" observedRunningTime="2025-11-22 08:58:38.222480728 +0000 UTC m=+6480.063903734" watchObservedRunningTime="2025-11-22 08:58:38.228100748 +0000 UTC m=+6480.069523754" Nov 22 08:58:42 crc kubenswrapper[4858]: I1122 08:58:42.066978 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:42 crc kubenswrapper[4858]: I1122 08:58:42.067538 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:43 crc kubenswrapper[4858]: I1122 08:58:43.120456 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dgrcw" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="registry-server" probeResult="failure" output=< Nov 22 08:58:43 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 08:58:43 crc kubenswrapper[4858]: > Nov 22 08:58:45 crc kubenswrapper[4858]: I1122 08:58:45.312834 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:58:45 crc kubenswrapper[4858]: I1122 08:58:45.314427 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:58:52 crc kubenswrapper[4858]: I1122 08:58:52.115145 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:52 crc kubenswrapper[4858]: I1122 08:58:52.159933 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:52 crc kubenswrapper[4858]: I1122 08:58:52.348641 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:53 crc kubenswrapper[4858]: I1122 08:58:53.316608 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dgrcw" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="registry-server" containerID="cri-o://e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c" gracePeriod=2 Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.314527 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.326811 4858 generic.go:334] "Generic (PLEG): container finished" podID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerID="e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c" exitCode=0 Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.326862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerDied","Data":"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c"} Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.326879 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrcw" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.326902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrcw" event={"ID":"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa","Type":"ContainerDied","Data":"f2950a750a620147c5cb1423c2a63b4033a45561cf00c4d78c364aa500c99c1c"} Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.326920 4858 scope.go:117] "RemoveContainer" containerID="e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.352697 4858 scope.go:117] "RemoveContainer" containerID="09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.373801 4858 scope.go:117] "RemoveContainer" containerID="abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.395676 4858 scope.go:117] "RemoveContainer" containerID="e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c" Nov 22 08:58:54 crc kubenswrapper[4858]: E1122 08:58:54.396209 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c\": container with ID starting with e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c not found: ID does not exist" containerID="e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.396298 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c"} err="failed to get container status \"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c\": rpc error: code = NotFound desc = could not find container \"e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c\": container with ID starting with e65a0e0328f4cb8e4a78a265fa395f4f4576da1955cf78ad29804f336e1df35c not found: ID does not exist" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.396367 4858 scope.go:117] "RemoveContainer" containerID="09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e" Nov 22 08:58:54 crc kubenswrapper[4858]: E1122 08:58:54.396747 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e\": container with ID starting with 09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e not found: ID does not exist" 
containerID="09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.396779 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e"} err="failed to get container status \"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e\": rpc error: code = NotFound desc = could not find container \"09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e\": container with ID starting with 09b01255a0edd95b1e6b149bc41dabf77f7ebf4566bf99e9a39f94a141d0263e not found: ID does not exist" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.396807 4858 scope.go:117] "RemoveContainer" containerID="abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669" Nov 22 08:58:54 crc kubenswrapper[4858]: E1122 08:58:54.397107 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669\": container with ID starting with abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669 not found: ID does not exist" containerID="abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.397153 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669"} err="failed to get container status \"abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669\": rpc error: code = NotFound desc = could not find container \"abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669\": container with ID starting with abb5ab3939a94e3a849ccec159bc43792fc34ef290c6df34a0f8cc70f3d94669 not found: ID does not exist" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.510222 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content\") pod \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.510392 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs82n\" (UniqueName: \"kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n\") pod \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.510558 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities\") pod \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\" (UID: \"bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa\") " Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.512031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities" (OuterVolumeSpecName: "utilities") pod "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" (UID: "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.516145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n" (OuterVolumeSpecName: "kube-api-access-zs82n") pod "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" (UID: "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa"). InnerVolumeSpecName "kube-api-access-zs82n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.603152 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" (UID: "bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.611710 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.611738 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs82n\" (UniqueName: \"kubernetes.io/projected/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-kube-api-access-zs82n\") on node \"crc\" DevicePath \"\"" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.611748 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.664777 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:54 crc kubenswrapper[4858]: I1122 08:58:54.671111 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgrcw"] Nov 22 08:58:55 crc kubenswrapper[4858]: I1122 08:58:55.548445 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" path="/var/lib/kubelet/pods/bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa/volumes" Nov 22 08:59:15 crc kubenswrapper[4858]: I1122 08:59:15.312103 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:59:15 crc kubenswrapper[4858]: I1122 08:59:15.312645 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:59:45 crc kubenswrapper[4858]: I1122 08:59:45.312419 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:59:45 crc kubenswrapper[4858]: I1122 08:59:45.313199 4858 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:59:45 crc kubenswrapper[4858]: I1122 08:59:45.313281 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 08:59:45 crc kubenswrapper[4858]: I1122 08:59:45.314396 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:59:45 crc kubenswrapper[4858]: I1122 08:59:45.314489 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" gracePeriod=600 Nov 22 08:59:46 crc kubenswrapper[4858]: I1122 08:59:46.012957 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" exitCode=0 Nov 22 08:59:46 crc kubenswrapper[4858]: I1122 08:59:46.013006 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356"} Nov 22 08:59:46 crc kubenswrapper[4858]: I1122 08:59:46.013050 4858 scope.go:117] "RemoveContainer" containerID="cffcb1a03a2e1683a14547d2b8a4e0df58bd9b04d8b9073187d32ca6e3f20ec2" Nov 22 08:59:46 crc kubenswrapper[4858]: E1122 08:59:46.087844 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 08:59:47 crc kubenswrapper[4858]: I1122 08:59:47.027821 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 08:59:47 crc kubenswrapper[4858]: E1122 08:59:47.028165 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.203936 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh"] Nov 22 09:00:00 crc kubenswrapper[4858]: E1122 09:00:00.205073 4858 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.212521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4858]: E1122 09:00:00.212588 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.212597 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4858]: E1122 09:00:00.212663 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.212672 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.213104 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf50699e-f3c5-4f08-a4b4-3f0a8802e3aa" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.213832 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.218523 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh"] Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.219425 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.219649 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.325818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.325876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.325900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tncg9\" (UniqueName: \"kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.427940 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.428058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tncg9\" (UniqueName: \"kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.428796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.430465 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.438192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.451316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tncg9\" (UniqueName: \"kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9\") pod \"collect-profiles-29396700-6mvkh\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.546158 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:00 crc kubenswrapper[4858]: I1122 09:00:00.992100 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh"] Nov 22 09:00:01 crc kubenswrapper[4858]: I1122 09:00:01.153457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" event={"ID":"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb","Type":"ContainerStarted","Data":"53bda560240b5d577c721ed2664289f440d428537bb180d94ac73a0dc750dee3"} Nov 22 09:00:01 crc kubenswrapper[4858]: I1122 09:00:01.536902 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:00:01 crc kubenswrapper[4858]: E1122 09:00:01.537269 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:00:02 crc kubenswrapper[4858]: I1122 09:00:02.163637 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" event={"ID":"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb","Type":"ContainerStarted","Data":"dac0d5d763724def397e39a2cea6bf3d318eb53eb26706b09afb7afe75e284f0"} Nov 22 09:00:02 crc kubenswrapper[4858]: I1122 09:00:02.189608 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" podStartSLOduration=2.189579833 podStartE2EDuration="2.189579833s" podCreationTimestamp="2025-11-22 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:00:02.183157458 +0000 UTC m=+6564.024580464" watchObservedRunningTime="2025-11-22 09:00:02.189579833 +0000 UTC m=+6564.031002839" Nov 22 09:00:03 crc kubenswrapper[4858]: I1122 09:00:03.176674 4858 generic.go:334] "Generic (PLEG): container finished" podID="8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" containerID="dac0d5d763724def397e39a2cea6bf3d318eb53eb26706b09afb7afe75e284f0" exitCode=0 Nov 22 09:00:03 crc kubenswrapper[4858]: I1122 09:00:03.176790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" event={"ID":"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb","Type":"ContainerDied","Data":"dac0d5d763724def397e39a2cea6bf3d318eb53eb26706b09afb7afe75e284f0"} Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.545384 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.606739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume\") pod \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.606845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tncg9\" (UniqueName: \"kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9\") pod \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.607050 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume\") pod \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\" (UID: \"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb\") " Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.608054 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume" (OuterVolumeSpecName: "config-volume") pod "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" (UID: "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.614775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9" (OuterVolumeSpecName: "kube-api-access-tncg9") pod "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" (UID: "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb"). InnerVolumeSpecName "kube-api-access-tncg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.614834 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" (UID: "8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.709233 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.709280 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:04 crc kubenswrapper[4858]: I1122 09:00:04.709292 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tncg9\" (UniqueName: \"kubernetes.io/projected/8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb-kube-api-access-tncg9\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.192661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" event={"ID":"8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb","Type":"ContainerDied","Data":"53bda560240b5d577c721ed2664289f440d428537bb180d94ac73a0dc750dee3"} Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.192714 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bda560240b5d577c721ed2664289f440d428537bb180d94ac73a0dc750dee3" Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.192714 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-6mvkh" Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.285379 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx"] Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.292068 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-7chsx"] Nov 22 09:00:05 crc kubenswrapper[4858]: I1122 09:00:05.552190 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ebfdfb9-2131-4121-9dae-064a4b885a05" path="/var/lib/kubelet/pods/3ebfdfb9-2131-4121-9dae-064a4b885a05/volumes" Nov 22 09:00:16 crc kubenswrapper[4858]: I1122 09:00:16.535862 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:00:16 crc kubenswrapper[4858]: E1122 09:00:16.536781 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:00:29 crc kubenswrapper[4858]: I1122 09:00:29.540454 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:00:29 crc kubenswrapper[4858]: E1122 09:00:29.541385 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:00:33 crc kubenswrapper[4858]: I1122 09:00:33.210750 4858 scope.go:117] "RemoveContainer" containerID="496ed9a2a1df605e0e7725217c93c999e6e2f725fa8f67414f1fbf259bf00721" Nov 22 09:00:44 crc kubenswrapper[4858]: I1122 09:00:44.535759 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:00:44 crc kubenswrapper[4858]: E1122 09:00:44.536711 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:00:57 crc kubenswrapper[4858]: I1122 09:00:57.536253 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:00:57 crc kubenswrapper[4858]: E1122 09:00:57.537288 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:01:08 crc kubenswrapper[4858]: I1122 09:01:08.535435 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:01:08 crc kubenswrapper[4858]: E1122 09:01:08.536629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:01:19 crc kubenswrapper[4858]: I1122 09:01:19.540640 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:01:19 crc kubenswrapper[4858]: E1122 09:01:19.541587 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:01:32 crc kubenswrapper[4858]: I1122 09:01:32.535639 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:01:32 crc kubenswrapper[4858]: E1122 09:01:32.536691 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:01:47 crc kubenswrapper[4858]: I1122 09:01:47.539867 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:01:47 crc kubenswrapper[4858]: E1122 09:01:47.540659 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:01:59 crc kubenswrapper[4858]: I1122 09:01:59.542940 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:01:59 crc kubenswrapper[4858]: E1122 09:01:59.544476 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:02:10 crc kubenswrapper[4858]: I1122 09:02:10.536450 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:02:10 crc kubenswrapper[4858]: E1122 09:02:10.537454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:02:23 crc kubenswrapper[4858]: I1122 09:02:23.536232 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:02:23 crc kubenswrapper[4858]: E1122 09:02:23.537398 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:02:35 crc kubenswrapper[4858]: I1122 09:02:35.536260 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:02:35 crc kubenswrapper[4858]: E1122 09:02:35.537582 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:02:49 crc kubenswrapper[4858]: I1122 09:02:49.547285 4858 
scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:02:49 crc kubenswrapper[4858]: E1122 09:02:49.548238 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:04 crc kubenswrapper[4858]: I1122 09:03:04.535395 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:03:04 crc kubenswrapper[4858]: E1122 09:03:04.536063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:17 crc kubenswrapper[4858]: I1122 09:03:17.536539 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:03:17 crc kubenswrapper[4858]: E1122 09:03:17.537770 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:31 crc kubenswrapper[4858]: I1122 09:03:31.536009 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:03:31 crc kubenswrapper[4858]: E1122 09:03:31.537156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:43 crc kubenswrapper[4858]: I1122 09:03:43.537692 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:03:43 crc kubenswrapper[4858]: E1122 09:03:43.538632 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.152634 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:03:48 crc kubenswrapper[4858]: E1122 09:03:48.153850 4858 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" containerName="collect-profiles" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.153867 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" containerName="collect-profiles" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.154057 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf9c11a-b3d2-4d9f-85d7-4d73908d64eb" containerName="collect-profiles" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.155283 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.172964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.173008 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.173121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.173180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fbq\" (UniqueName: \"kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.274440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fbq\" (UniqueName: \"kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.274548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.274635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.275565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.276938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.299497 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fbq\" (UniqueName: \"kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq\") pod \"certified-operators-4jmgt\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.485499 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:48 crc kubenswrapper[4858]: I1122 09:03:48.981507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:03:49 crc kubenswrapper[4858]: I1122 09:03:49.247275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerStarted","Data":"3590e69dba276347b5feae889c539c69ef031b833f1212988abef1d26dccf1d8"} Nov 22 09:03:50 crc kubenswrapper[4858]: I1122 09:03:50.256673 4858 generic.go:334] "Generic (PLEG): container finished" podID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerID="dfd3094554edbfe068768cbc289eb034a72db8e7e6b455329488bdafa7f1b3ae" exitCode=0 Nov 22 09:03:50 crc kubenswrapper[4858]: I1122 09:03:50.256754 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerDied","Data":"dfd3094554edbfe068768cbc289eb034a72db8e7e6b455329488bdafa7f1b3ae"} Nov 22 09:03:50 crc kubenswrapper[4858]: I1122 09:03:50.260401 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:03:52 crc kubenswrapper[4858]: I1122 09:03:52.278762 4858 generic.go:334] "Generic (PLEG): container finished" podID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerID="b8232aff69f4c017f6164fa221c22f15707859666ca4a2bba5a084190b4f63c8" exitCode=0 Nov 22 09:03:52 crc kubenswrapper[4858]: I1122 09:03:52.279288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerDied","Data":"b8232aff69f4c017f6164fa221c22f15707859666ca4a2bba5a084190b4f63c8"} Nov 22 09:03:55 crc kubenswrapper[4858]: I1122 09:03:55.305434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerStarted","Data":"2ae87d02f190dee464ef04edfe244a3d23a48aad94ef79a1c4035baf54d7983a"} Nov 22 09:03:55 crc kubenswrapper[4858]: I1122 09:03:55.327455 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jmgt" podStartSLOduration=3.069757573 
podStartE2EDuration="7.327425385s" podCreationTimestamp="2025-11-22 09:03:48 +0000 UTC" firstStartedPulling="2025-11-22 09:03:50.260075118 +0000 UTC m=+6792.101498124" lastFinishedPulling="2025-11-22 09:03:54.51774293 +0000 UTC m=+6796.359165936" observedRunningTime="2025-11-22 09:03:55.324187661 +0000 UTC m=+6797.165610687" watchObservedRunningTime="2025-11-22 09:03:55.327425385 +0000 UTC m=+6797.168848391" Nov 22 09:03:56 crc kubenswrapper[4858]: I1122 09:03:56.536109 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:03:56 crc kubenswrapper[4858]: E1122 09:03:56.536891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:03:58 crc kubenswrapper[4858]: I1122 09:03:58.486818 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:58 crc kubenswrapper[4858]: I1122 09:03:58.486894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:58 crc kubenswrapper[4858]: I1122 09:03:58.534516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:59 crc kubenswrapper[4858]: I1122 09:03:59.393682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:03:59 crc kubenswrapper[4858]: I1122 09:03:59.443772 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:04:01 crc kubenswrapper[4858]: I1122 09:04:01.364842 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jmgt" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="registry-server" containerID="cri-o://2ae87d02f190dee464ef04edfe244a3d23a48aad94ef79a1c4035baf54d7983a" gracePeriod=2 Nov 22 09:04:02 crc kubenswrapper[4858]: I1122 09:04:02.375125 4858 generic.go:334] "Generic (PLEG): container finished" podID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerID="2ae87d02f190dee464ef04edfe244a3d23a48aad94ef79a1c4035baf54d7983a" exitCode=0 Nov 22 09:04:02 crc kubenswrapper[4858]: I1122 09:04:02.375185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerDied","Data":"2ae87d02f190dee464ef04edfe244a3d23a48aad94ef79a1c4035baf54d7983a"} Nov 22 09:04:02 crc kubenswrapper[4858]: I1122 09:04:02.986745 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.044527 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities\") pod \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.044571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content\") pod \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.044696 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fbq\" (UniqueName: \"kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq\") pod \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\" (UID: \"9f630f83-63a5-41aa-8a7d-ce6dace404ab\") " Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.045642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities" (OuterVolumeSpecName: "utilities") pod "9f630f83-63a5-41aa-8a7d-ce6dace404ab" (UID: "9f630f83-63a5-41aa-8a7d-ce6dace404ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.052346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq" (OuterVolumeSpecName: "kube-api-access-q8fbq") pod "9f630f83-63a5-41aa-8a7d-ce6dace404ab" (UID: "9f630f83-63a5-41aa-8a7d-ce6dace404ab"). InnerVolumeSpecName "kube-api-access-q8fbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.108376 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f630f83-63a5-41aa-8a7d-ce6dace404ab" (UID: "9f630f83-63a5-41aa-8a7d-ce6dace404ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.146672 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fbq\" (UniqueName: \"kubernetes.io/projected/9f630f83-63a5-41aa-8a7d-ce6dace404ab-kube-api-access-q8fbq\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.146733 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.146749 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f630f83-63a5-41aa-8a7d-ce6dace404ab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.385906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jmgt" event={"ID":"9f630f83-63a5-41aa-8a7d-ce6dace404ab","Type":"ContainerDied","Data":"3590e69dba276347b5feae889c539c69ef031b833f1212988abef1d26dccf1d8"} Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.385991 4858 scope.go:117] "RemoveContainer" containerID="2ae87d02f190dee464ef04edfe244a3d23a48aad94ef79a1c4035baf54d7983a" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.386260 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jmgt" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.415816 4858 scope.go:117] "RemoveContainer" containerID="b8232aff69f4c017f6164fa221c22f15707859666ca4a2bba5a084190b4f63c8" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.436416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.454321 4858 scope.go:117] "RemoveContainer" containerID="dfd3094554edbfe068768cbc289eb034a72db8e7e6b455329488bdafa7f1b3ae" Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.460107 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jmgt"] Nov 22 09:04:03 crc kubenswrapper[4858]: I1122 09:04:03.559702 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" path="/var/lib/kubelet/pods/9f630f83-63a5-41aa-8a7d-ce6dace404ab/volumes" Nov 22 09:04:08 crc kubenswrapper[4858]: I1122 09:04:08.536284 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:04:08 crc kubenswrapper[4858]: E1122 09:04:08.536984 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.705549 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hnxjz"] Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.711469 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hnxjz"] Nov 22 09:04:11 crc kubenswrapper[4858]: 
I1122 09:04:11.829344 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mgrq6"] Nov 22 09:04:11 crc kubenswrapper[4858]: E1122 09:04:11.829713 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="extract-content" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.829731 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="extract-content" Nov 22 09:04:11 crc kubenswrapper[4858]: E1122 09:04:11.829756 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="extract-utilities" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.829768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="extract-utilities" Nov 22 09:04:11 crc kubenswrapper[4858]: E1122 09:04:11.829792 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="registry-server" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.829800 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="registry-server" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.829955 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f630f83-63a5-41aa-8a7d-ce6dace404ab" containerName="registry-server" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.830519 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.832772 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.832848 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.833234 4858 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-dmxpx" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.833460 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.842678 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mgrq6"] Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.901352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcnb\" (UniqueName: \"kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.901754 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:11 crc kubenswrapper[4858]: I1122 09:04:11.901839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.002509 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.002584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkcnb\" (UniqueName: \"kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.002628 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.002951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.003311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.024704 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkcnb\" (UniqueName: \"kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb\") pod \"crc-storage-crc-mgrq6\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.155213 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:12 crc kubenswrapper[4858]: I1122 09:04:12.598691 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mgrq6"] Nov 22 09:04:13 crc kubenswrapper[4858]: I1122 09:04:13.469592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mgrq6" event={"ID":"07157021-13a3-4e5e-ae41-67afd3beee2a","Type":"ContainerStarted","Data":"5ff0a96b101e219467e54438f250edd852a92c3b1e59e717ccf74d6ae8ee6b3c"} Nov 22 09:04:13 crc kubenswrapper[4858]: I1122 09:04:13.548546 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3286385-d91f-471f-be8c-9b439311fa51" path="/var/lib/kubelet/pods/a3286385-d91f-471f-be8c-9b439311fa51/volumes" Nov 22 09:04:15 crc kubenswrapper[4858]: I1122 09:04:15.490129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mgrq6" event={"ID":"07157021-13a3-4e5e-ae41-67afd3beee2a","Type":"ContainerStarted","Data":"032cc615925a90436ca327c0e66e7dc900bd185b7377ef4fdc8c17b514a0eb43"} Nov 22 09:04:15 crc kubenswrapper[4858]: I1122 09:04:15.516091 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-mgrq6" podStartSLOduration=3.341067471 podStartE2EDuration="4.516062079s" podCreationTimestamp="2025-11-22 09:04:11 +0000 UTC" firstStartedPulling="2025-11-22 09:04:12.605797171 +0000 UTC m=+6814.447220187" lastFinishedPulling="2025-11-22 09:04:13.780791789 +0000 UTC m=+6815.622214795" observedRunningTime="2025-11-22 09:04:15.509468518 +0000 UTC m=+6817.350891524" watchObservedRunningTime="2025-11-22 09:04:15.516062079 +0000 UTC m=+6817.357485085" Nov 22 09:04:16 crc kubenswrapper[4858]: I1122 09:04:16.500883 4858 generic.go:334] "Generic (PLEG): container finished" podID="07157021-13a3-4e5e-ae41-67afd3beee2a" containerID="032cc615925a90436ca327c0e66e7dc900bd185b7377ef4fdc8c17b514a0eb43" exitCode=0 Nov 22 09:04:16 crc kubenswrapper[4858]: I1122 09:04:16.500978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mgrq6" event={"ID":"07157021-13a3-4e5e-ae41-67afd3beee2a","Type":"ContainerDied","Data":"032cc615925a90436ca327c0e66e7dc900bd185b7377ef4fdc8c17b514a0eb43"} Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.808248 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.996603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage\") pod \"07157021-13a3-4e5e-ae41-67afd3beee2a\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.997010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkcnb\" (UniqueName: \"kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb\") pod \"07157021-13a3-4e5e-ae41-67afd3beee2a\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.997208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt\") pod \"07157021-13a3-4e5e-ae41-67afd3beee2a\" (UID: \"07157021-13a3-4e5e-ae41-67afd3beee2a\") " Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.997369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "07157021-13a3-4e5e-ae41-67afd3beee2a" (UID: "07157021-13a3-4e5e-ae41-67afd3beee2a"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:04:17 crc kubenswrapper[4858]: I1122 09:04:17.997795 4858 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/07157021-13a3-4e5e-ae41-67afd3beee2a-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.001608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb" (OuterVolumeSpecName: "kube-api-access-hkcnb") pod "07157021-13a3-4e5e-ae41-67afd3beee2a" (UID: "07157021-13a3-4e5e-ae41-67afd3beee2a"). InnerVolumeSpecName "kube-api-access-hkcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.016827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "07157021-13a3-4e5e-ae41-67afd3beee2a" (UID: "07157021-13a3-4e5e-ae41-67afd3beee2a"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.099391 4858 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/07157021-13a3-4e5e-ae41-67afd3beee2a-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.099424 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkcnb\" (UniqueName: \"kubernetes.io/projected/07157021-13a3-4e5e-ae41-67afd3beee2a-kube-api-access-hkcnb\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.525297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mgrq6" event={"ID":"07157021-13a3-4e5e-ae41-67afd3beee2a","Type":"ContainerDied","Data":"5ff0a96b101e219467e54438f250edd852a92c3b1e59e717ccf74d6ae8ee6b3c"} Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.525407 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ff0a96b101e219467e54438f250edd852a92c3b1e59e717ccf74d6ae8ee6b3c" Nov 22 09:04:18 crc kubenswrapper[4858]: I1122 09:04:18.525392 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mgrq6" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.540386 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:04:19 crc kubenswrapper[4858]: E1122 09:04:19.541039 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.734028 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-mgrq6"] Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.739521 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-mgrq6"] Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.870436 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-chgw5"] Nov 22 09:04:19 crc kubenswrapper[4858]: E1122 09:04:19.871056 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07157021-13a3-4e5e-ae41-67afd3beee2a" containerName="storage" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.871127 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="07157021-13a3-4e5e-ae41-67afd3beee2a" containerName="storage" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.871484 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="07157021-13a3-4e5e-ae41-67afd3beee2a" containerName="storage" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.872413 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.875535 4858 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-dmxpx" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.879049 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.879161 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-chgw5"] Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.879374 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.879524 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.927107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.927147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhj2\" (UniqueName: \"kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:19 crc kubenswrapper[4858]: I1122 09:04:19.927222 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.029085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.029152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qhj2\" (UniqueName: \"kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.029204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.029868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " 
pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.030127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.056144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qhj2\" (UniqueName: \"kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2\") pod \"crc-storage-crc-chgw5\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.194670 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:20 crc kubenswrapper[4858]: I1122 09:04:20.616609 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-chgw5"] Nov 22 09:04:21 crc kubenswrapper[4858]: I1122 09:04:21.548147 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07157021-13a3-4e5e-ae41-67afd3beee2a" path="/var/lib/kubelet/pods/07157021-13a3-4e5e-ae41-67afd3beee2a/volumes" Nov 22 09:04:21 crc kubenswrapper[4858]: I1122 09:04:21.555938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-chgw5" event={"ID":"4930ac35-0fb6-4c67-82b1-d4a11bd21320","Type":"ContainerStarted","Data":"02b1659b764b605a06c39260e0e19515bbccff7a434a9db0039b31e32c0531b5"} Nov 22 09:04:23 crc kubenswrapper[4858]: I1122 09:04:23.593110 4858 generic.go:334] "Generic (PLEG): container finished" podID="4930ac35-0fb6-4c67-82b1-d4a11bd21320" containerID="67d204318d421a8357c796d9667a7a40b72350ec3a1438627b8901d426d99f28" exitCode=0 Nov 22 09:04:23 crc kubenswrapper[4858]: I1122 09:04:23.593180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-chgw5" event={"ID":"4930ac35-0fb6-4c67-82b1-d4a11bd21320","Type":"ContainerDied","Data":"67d204318d421a8357c796d9667a7a40b72350ec3a1438627b8901d426d99f28"} Nov 22 09:04:24 crc kubenswrapper[4858]: I1122 09:04:24.916668 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.019574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qhj2\" (UniqueName: \"kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2\") pod \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.019645 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt\") pod \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.019672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage\") pod \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\" (UID: \"4930ac35-0fb6-4c67-82b1-d4a11bd21320\") " Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.020203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4930ac35-0fb6-4c67-82b1-d4a11bd21320" (UID: "4930ac35-0fb6-4c67-82b1-d4a11bd21320"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.027080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2" (OuterVolumeSpecName: "kube-api-access-6qhj2") pod "4930ac35-0fb6-4c67-82b1-d4a11bd21320" (UID: "4930ac35-0fb6-4c67-82b1-d4a11bd21320"). InnerVolumeSpecName "kube-api-access-6qhj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.042010 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4930ac35-0fb6-4c67-82b1-d4a11bd21320" (UID: "4930ac35-0fb6-4c67-82b1-d4a11bd21320"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.120979 4858 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4930ac35-0fb6-4c67-82b1-d4a11bd21320-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.121009 4858 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4930ac35-0fb6-4c67-82b1-d4a11bd21320-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.121021 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qhj2\" (UniqueName: \"kubernetes.io/projected/4930ac35-0fb6-4c67-82b1-d4a11bd21320-kube-api-access-6qhj2\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.610254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-chgw5" event={"ID":"4930ac35-0fb6-4c67-82b1-d4a11bd21320","Type":"ContainerDied","Data":"02b1659b764b605a06c39260e0e19515bbccff7a434a9db0039b31e32c0531b5"} Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.610297 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-chgw5" Nov 22 09:04:25 crc kubenswrapper[4858]: I1122 09:04:25.610300 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b1659b764b605a06c39260e0e19515bbccff7a434a9db0039b31e32c0531b5" Nov 22 09:04:33 crc kubenswrapper[4858]: I1122 09:04:33.325867 4858 scope.go:117] "RemoveContainer" containerID="dd27dcd3b7ce6d59c9ef85714b0507446c6f31a57249203660fa91083e5f9df3" Nov 22 09:04:33 crc kubenswrapper[4858]: I1122 09:04:33.536358 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:04:33 crc kubenswrapper[4858]: E1122 09:04:33.536708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:04:47 crc kubenswrapper[4858]: I1122 09:04:47.536156 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:04:47 crc kubenswrapper[4858]: I1122 09:04:47.801428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454"} Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.873627 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:04:59 crc kubenswrapper[4858]: E1122 09:04:59.874576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4930ac35-0fb6-4c67-82b1-d4a11bd21320" containerName="storage" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.874595 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4930ac35-0fb6-4c67-82b1-d4a11bd21320" containerName="storage" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.874811 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4930ac35-0fb6-4c67-82b1-d4a11bd21320" containerName="storage" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.876101 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.892884 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.900037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.900224 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:04:59 crc kubenswrapper[4858]: I1122 09:04:59.900257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7fwc\" (UniqueName: \"kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.001402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.001496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7fwc\" (UniqueName: \"kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.001516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.001993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.001996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities\") pod 
\"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.024953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7fwc\" (UniqueName: \"kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc\") pod \"redhat-marketplace-lmnss\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.213707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.702424 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:05:00 crc kubenswrapper[4858]: W1122 09:05:00.713415 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85beefae_0d7e_457e_868a_efd023ed44d4.slice/crio-74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9 WatchSource:0}: Error finding container 74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9: Status 404 returned error can't find the container with id 74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9 Nov 22 09:05:00 crc kubenswrapper[4858]: I1122 09:05:00.912889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerStarted","Data":"74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9"} Nov 22 09:05:01 crc kubenswrapper[4858]: I1122 09:05:01.923257 4858 generic.go:334] "Generic (PLEG): container finished" podID="85beefae-0d7e-457e-868a-efd023ed44d4" containerID="e01d77ab7454fc86512385d1537f266faba7aff8f5819bdb192128bddd083ada" exitCode=0 Nov 22 09:05:01 crc kubenswrapper[4858]: I1122 09:05:01.923380 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerDied","Data":"e01d77ab7454fc86512385d1537f266faba7aff8f5819bdb192128bddd083ada"} Nov 22 09:05:04 crc kubenswrapper[4858]: I1122 09:05:04.949205 4858 generic.go:334] "Generic (PLEG): container finished" podID="85beefae-0d7e-457e-868a-efd023ed44d4" containerID="dee622070b5a4a0d32f089dbdddb0ef88ee2cb854aa320ef0315952bd6e9ff4e" exitCode=0 Nov 22 09:05:04 crc kubenswrapper[4858]: I1122 09:05:04.949338 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerDied","Data":"dee622070b5a4a0d32f089dbdddb0ef88ee2cb854aa320ef0315952bd6e9ff4e"} Nov 22 09:05:06 crc kubenswrapper[4858]: I1122 09:05:06.965717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerStarted","Data":"64cc09b78f44d8821336dc8dd392d115f9d7609cfa9e9c281043c030f4f8335a"} Nov 22 09:05:07 crc kubenswrapper[4858]: I1122 09:05:07.003297 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lmnss" podStartSLOduration=3.630027562 podStartE2EDuration="8.003270753s" podCreationTimestamp="2025-11-22 09:04:59 +0000 UTC" 
firstStartedPulling="2025-11-22 09:05:01.925249245 +0000 UTC m=+6863.766672241" lastFinishedPulling="2025-11-22 09:05:06.298492426 +0000 UTC m=+6868.139915432" observedRunningTime="2025-11-22 09:05:07.000454703 +0000 UTC m=+6868.841877739" watchObservedRunningTime="2025-11-22 09:05:07.003270753 +0000 UTC m=+6868.844693769" Nov 22 09:05:10 crc kubenswrapper[4858]: I1122 09:05:10.214495 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:10 crc kubenswrapper[4858]: I1122 09:05:10.214889 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:10 crc kubenswrapper[4858]: I1122 09:05:10.266676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:11 crc kubenswrapper[4858]: I1122 09:05:11.059776 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:11 crc kubenswrapper[4858]: I1122 09:05:11.659442 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:05:13 crc kubenswrapper[4858]: I1122 09:05:13.026676 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lmnss" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="registry-server" containerID="cri-o://64cc09b78f44d8821336dc8dd392d115f9d7609cfa9e9c281043c030f4f8335a" gracePeriod=2 Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.036937 4858 generic.go:334] "Generic (PLEG): container finished" podID="85beefae-0d7e-457e-868a-efd023ed44d4" containerID="64cc09b78f44d8821336dc8dd392d115f9d7609cfa9e9c281043c030f4f8335a" exitCode=0 Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.037024 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerDied","Data":"64cc09b78f44d8821336dc8dd392d115f9d7609cfa9e9c281043c030f4f8335a"} Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.037507 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lmnss" event={"ID":"85beefae-0d7e-457e-868a-efd023ed44d4","Type":"ContainerDied","Data":"74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9"} Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.037531 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a613391edf429032d16f3f4dd83e9161d386e59d9b6fbe54ba42769b8784e9" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.051134 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.226451 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content\") pod \"85beefae-0d7e-457e-868a-efd023ed44d4\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.226575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities\") pod \"85beefae-0d7e-457e-868a-efd023ed44d4\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.226624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7fwc\" (UniqueName: \"kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc\") pod \"85beefae-0d7e-457e-868a-efd023ed44d4\" (UID: \"85beefae-0d7e-457e-868a-efd023ed44d4\") " Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.228807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities" (OuterVolumeSpecName: "utilities") pod "85beefae-0d7e-457e-868a-efd023ed44d4" (UID: "85beefae-0d7e-457e-868a-efd023ed44d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.232281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc" (OuterVolumeSpecName: "kube-api-access-b7fwc") pod "85beefae-0d7e-457e-868a-efd023ed44d4" (UID: "85beefae-0d7e-457e-868a-efd023ed44d4"). InnerVolumeSpecName "kube-api-access-b7fwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.249859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85beefae-0d7e-457e-868a-efd023ed44d4" (UID: "85beefae-0d7e-457e-868a-efd023ed44d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.328602 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.328646 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7fwc\" (UniqueName: \"kubernetes.io/projected/85beefae-0d7e-457e-868a-efd023ed44d4-kube-api-access-b7fwc\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:14 crc kubenswrapper[4858]: I1122 09:05:14.328660 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85beefae-0d7e-457e-868a-efd023ed44d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:15 crc kubenswrapper[4858]: I1122 09:05:15.044350 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lmnss" Nov 22 09:05:15 crc kubenswrapper[4858]: I1122 09:05:15.083076 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:05:15 crc kubenswrapper[4858]: I1122 09:05:15.091286 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lmnss"] Nov 22 09:05:15 crc kubenswrapper[4858]: I1122 09:05:15.546500 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" path="/var/lib/kubelet/pods/85beefae-0d7e-457e-868a-efd023ed44d4/volumes" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.795462 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:20 crc kubenswrapper[4858]: E1122 09:05:20.796487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="extract-utilities" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.796504 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="extract-utilities" Nov 22 09:05:20 crc kubenswrapper[4858]: E1122 09:05:20.796532 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="extract-content" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.796541 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="extract-content" Nov 22 09:05:20 crc kubenswrapper[4858]: E1122 09:05:20.796560 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="registry-server" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.796568 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="registry-server" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.796765 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="85beefae-0d7e-457e-868a-efd023ed44d4" containerName="registry-server" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.799093 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.818506 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.971688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th85s\" (UniqueName: \"kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.972274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:20 crc kubenswrapper[4858]: I1122 09:05:20.972448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.073467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.073539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.073616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th85s\" (UniqueName: \"kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.074311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.074432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.098453 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-th85s\" (UniqueName: \"kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s\") pod \"community-operators-vlc5w\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.171967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:21 crc kubenswrapper[4858]: I1122 09:05:21.484578 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:22 crc kubenswrapper[4858]: I1122 09:05:22.107452 4858 generic.go:334] "Generic (PLEG): container finished" podID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerID="bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd" exitCode=0 Nov 22 09:05:22 crc kubenswrapper[4858]: I1122 09:05:22.107526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerDied","Data":"bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd"} Nov 22 09:05:22 crc kubenswrapper[4858]: I1122 09:05:22.108540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerStarted","Data":"a2f0397cc3d0b269f42b1a527a05dfc7ef6193cbb9bf0466ed7ee40cf60da255"} Nov 22 09:05:24 crc kubenswrapper[4858]: I1122 09:05:24.129185 4858 generic.go:334] "Generic (PLEG): container finished" podID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerID="eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b" exitCode=0 Nov 22 09:05:24 crc kubenswrapper[4858]: I1122 09:05:24.129281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerDied","Data":"eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b"} Nov 22 09:05:25 crc kubenswrapper[4858]: I1122 09:05:25.143477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerStarted","Data":"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe"} Nov 22 09:05:25 crc kubenswrapper[4858]: I1122 09:05:25.167619 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlc5w" podStartSLOduration=2.641262287 podStartE2EDuration="5.167592745s" podCreationTimestamp="2025-11-22 09:05:20 +0000 UTC" firstStartedPulling="2025-11-22 09:05:22.110460827 +0000 UTC m=+6883.951883833" lastFinishedPulling="2025-11-22 09:05:24.636791285 +0000 UTC m=+6886.478214291" observedRunningTime="2025-11-22 09:05:25.165144736 +0000 UTC m=+6887.006567762" watchObservedRunningTime="2025-11-22 09:05:25.167592745 +0000 UTC m=+6887.009015751" Nov 22 09:05:31 crc kubenswrapper[4858]: I1122 09:05:31.172809 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:31 crc kubenswrapper[4858]: I1122 09:05:31.174631 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:31 crc kubenswrapper[4858]: I1122 09:05:31.239202 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:31 crc kubenswrapper[4858]: I1122 09:05:31.281339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:31 crc kubenswrapper[4858]: I1122 09:05:31.474674 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.241765 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlc5w" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="registry-server" containerID="cri-o://4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe" gracePeriod=2 Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.744228 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.879447 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content\") pod \"8382857d-4db8-4338-aff2-89e8afa9aaeb\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.879576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th85s\" (UniqueName: \"kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s\") pod \"8382857d-4db8-4338-aff2-89e8afa9aaeb\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.879639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities\") pod \"8382857d-4db8-4338-aff2-89e8afa9aaeb\" (UID: \"8382857d-4db8-4338-aff2-89e8afa9aaeb\") " Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.880780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities" (OuterVolumeSpecName: "utilities") pod "8382857d-4db8-4338-aff2-89e8afa9aaeb" (UID: "8382857d-4db8-4338-aff2-89e8afa9aaeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.888836 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s" (OuterVolumeSpecName: "kube-api-access-th85s") pod "8382857d-4db8-4338-aff2-89e8afa9aaeb" (UID: "8382857d-4db8-4338-aff2-89e8afa9aaeb"). InnerVolumeSpecName "kube-api-access-th85s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.981949 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:33 crc kubenswrapper[4858]: I1122 09:05:33.982438 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th85s\" (UniqueName: \"kubernetes.io/projected/8382857d-4db8-4338-aff2-89e8afa9aaeb-kube-api-access-th85s\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.253173 4858 generic.go:334] "Generic (PLEG): container finished" podID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerID="4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe" exitCode=0 Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.253254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerDied","Data":"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe"} Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.253296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlc5w" event={"ID":"8382857d-4db8-4338-aff2-89e8afa9aaeb","Type":"ContainerDied","Data":"a2f0397cc3d0b269f42b1a527a05dfc7ef6193cbb9bf0466ed7ee40cf60da255"} Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.253341 4858 scope.go:117] "RemoveContainer" containerID="4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.253480 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlc5w" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.277515 4858 scope.go:117] "RemoveContainer" containerID="eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.301625 4858 scope.go:117] "RemoveContainer" containerID="bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.328011 4858 scope.go:117] "RemoveContainer" containerID="4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe" Nov 22 09:05:34 crc kubenswrapper[4858]: E1122 09:05:34.328508 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe\": container with ID starting with 4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe not found: ID does not exist" containerID="4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.328588 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe"} err="failed to get container status \"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe\": rpc error: code = NotFound desc = could not find container \"4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe\": container with ID starting with 4d9facd7ff90fa39c286d1961e20d2dace1b2ff31eef63af0ef4ab7291663bfe not found: ID does not exist" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.328625 4858 scope.go:117] "RemoveContainer" containerID="eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b" Nov 22 09:05:34 crc kubenswrapper[4858]: E1122 09:05:34.329285 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b\": container with ID starting with eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b not found: ID does not exist" containerID="eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.329457 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b"} err="failed to get container status \"eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b\": rpc error: code = NotFound desc = could not find container \"eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b\": container with ID starting with eb8e2e80f48ae774946a0874df26a5d4504d681d81fde9aefa203a24bcb7990b not found: ID does not exist" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.329584 4858 scope.go:117] "RemoveContainer" containerID="bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd" Nov 22 09:05:34 crc kubenswrapper[4858]: E1122 09:05:34.330075 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd\": container with ID starting with bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd not found: ID does not exist" containerID="bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd" 
Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.330116 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd"} err="failed to get container status \"bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd\": rpc error: code = NotFound desc = could not find container \"bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd\": container with ID starting with bfc384951d759302a4d5e855d937a7d5ef92c383f2a35e3f7d9c0e4a11a8dadd not found: ID does not exist" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.824814 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8382857d-4db8-4338-aff2-89e8afa9aaeb" (UID: "8382857d-4db8-4338-aff2-89e8afa9aaeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.897635 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8382857d-4db8-4338-aff2-89e8afa9aaeb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.897908 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:34 crc kubenswrapper[4858]: I1122 09:05:34.907333 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlc5w"] Nov 22 09:05:35 crc kubenswrapper[4858]: I1122 09:05:35.545751 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" path="/var/lib/kubelet/pods/8382857d-4db8-4338-aff2-89e8afa9aaeb/volumes" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.165299 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:06:41 crc kubenswrapper[4858]: E1122 09:06:41.166769 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.166791 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4858]: E1122 09:06:41.166810 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.166818 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4858]: E1122 09:06:41.166835 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.166844 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.167056 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8382857d-4db8-4338-aff2-89e8afa9aaeb" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.168251 4858 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.170814 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zmr69" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.173196 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.173366 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.173443 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.173925 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.205923 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.208159 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.210298 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.229607 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcpw\" (UniqueName: \"kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.229730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.230833 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.331418 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcpw\" (UniqueName: \"kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.331514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl94h\" (UniqueName: \"kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.331536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " 
pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.331556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.331581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.332842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.353765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcpw\" (UniqueName: \"kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw\") pod \"dnsmasq-dns-6bbc85cdbf-4wvz8\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.432760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl94h\" (UniqueName: \"kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.432823 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.432867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.433952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.434114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.462099 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl94h\" (UniqueName: \"kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h\") pod \"dnsmasq-dns-7c4878bb99-cq6v2\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.494503 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.525925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.532988 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.568223 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.570139 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.587646 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.636226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.637604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns7t7\" (UniqueName: \"kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.637923 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.745136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns7t7\" (UniqueName: \"kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.745276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.745311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.746645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.746719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.771014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns7t7\" (UniqueName: \"kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7\") pod \"dnsmasq-dns-5c5d89b87f-xmbcw\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.878779 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.917168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" event={"ID":"60f0782a-2b88-48ce-99d2-9245d1140a02","Type":"ContainerStarted","Data":"2e272d7c9fd751a9aaa2887d8fbb0432cace8b45b6095b2654b5d2844b46801c"} Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.936603 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.945491 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.968432 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.970453 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:41 crc kubenswrapper[4858]: I1122 09:06:41.985054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.020855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:06:42 crc kubenswrapper[4858]: W1122 09:06:42.034237 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8acc1758_2810_44a3_88c3_042466670b3b.slice/crio-0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147 WatchSource:0}: Error finding container 0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147: Status 404 returned error can't find the container with id 0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147 Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.056007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.056097 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.056154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hrt\" (UniqueName: \"kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.157110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hrt\" (UniqueName: \"kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.157465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.157498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.158246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config\") pod 
\"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.158357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.176634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hrt\" (UniqueName: \"kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt\") pod \"dnsmasq-dns-574cff9d7f-l4945\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.317201 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.499004 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.756522 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.758625 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.764822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765017 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765170 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765345 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4dqbs" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765436 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765553 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.765615 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.792730 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.876999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lz9\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877268 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877555 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.877672 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.928939 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" event={"ID":"8acc1758-2810-44a3-88c3-042466670b3b","Type":"ContainerStarted","Data":"0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147"} Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.935236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" event={"ID":"aa53f6b5-1640-4261-9800-219d00653b49","Type":"ContainerStarted","Data":"aa4c42a62a739c77c3be29273ba0ddee3f6f2d679da20acf787f51b7098d0ad5"} Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.959612 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979797 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9lz9\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.979942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.981165 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.982127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.982204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.982148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.984629 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.986506 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.986647 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ab47d8670df5e82a50528381189c1984e72821a7c18b99b9894f87a7daf2012d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.987096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.987168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.987422 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.988372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:42 crc kubenswrapper[4858]: I1122 09:06:42.998434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9lz9\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.048659 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.078858 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.080534 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.084986 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.085546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.085900 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.086161 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.086526 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.086888 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bzc8j" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.087138 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.103723 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.106963 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czmf\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191607 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 
crc kubenswrapper[4858]: I1122 09:06:43.191652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191672 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.191779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293743 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293783 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czmf\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.293962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.294009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.294483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.294857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.294904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.297534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.299591 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.299652 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c0cb2414112c3f79a17a93c397588a9527ebe61840fd9b61167ee318ce72b09/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.301051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.302829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.303570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.305893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.306057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.314202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czmf\" (UniqueName: 
\"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.329917 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.453891 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.502471 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.503824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.512817 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.512991 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xjdh8" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.513123 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.513509 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.522486 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.529532 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.607904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.608382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.608445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5zvv\" (UniqueName: \"kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.608472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.608497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.613378 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.613567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.613705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.646402 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: W1122 09:06:43.669792 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e056a1_9fa1_482f_9c3a_42f2b45ce40b.slice/crio-1263782ee5804a6c5d1bb4e2aa564ec569fa770029ae7a895c955a0680e7e34f WatchSource:0}: Error finding container 1263782ee5804a6c5d1bb4e2aa564ec569fa770029ae7a895c955a0680e7e34f: Status 404 returned error can't find the container with id 1263782ee5804a6c5d1bb4e2aa564ec569fa770029ae7a895c955a0680e7e34f Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.715794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5zvv\" (UniqueName: \"kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.717073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.717080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.717929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.722205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.723826 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.724244 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.724279 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a6ad3057fd073140a0b133ba6948d49bdfc30bde43537a25d062907e0bbbccf/globalmount\"" pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.727543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.733152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5zvv\" (UniqueName: \"kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.760665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") pod \"openstack-galera-0\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.836709 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.911310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.968806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" event={"ID":"05250187-2f7f-4ae2-b003-7bc56e49c9ea","Type":"ContainerStarted","Data":"34fee597f3cc6ff701f94bf70974dc496709573b1590615048fe4db5fda122d7"} Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.972514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerStarted","Data":"fd3e80e90c7160b409c0f6c14177a03e25895fd98bfffccf6993e3adad0a969e"} Nov 22 09:06:43 crc kubenswrapper[4858]: I1122 09:06:43.975002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerStarted","Data":"1263782ee5804a6c5d1bb4e2aa564ec569fa770029ae7a895c955a0680e7e34f"} Nov 22 09:06:44 crc kubenswrapper[4858]: I1122 09:06:44.232030 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:06:44 crc kubenswrapper[4858]: I1122 09:06:44.998552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerStarted","Data":"2a4d9a98a563fe256ec2c478d9c2a855290360e126d5e3cc7112649d1622a622"} Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.212548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.213974 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.215760 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.218060 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-56cnc" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.220112 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.220269 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.225892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.351435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.351799 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.351820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.351851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.354468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.355123 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.355152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-2rmdz\" (UniqueName: \"kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.355420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmdz\" (UniqueName: \"kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.457382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.458606 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.458682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.460261 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.460722 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.460768 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f6ea6f40fe0e8d0bc4a5bc7f8e4ee44c74f3863ef1adb49c51dc1654dcfe4702/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.466092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.466461 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.469723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.478029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmdz\" (UniqueName: 
\"kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.501886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") pod \"openstack-cell1-galera-0\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.553626 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.586510 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.628632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.628756 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.640514 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.641294 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.641300 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tj4n7" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.765926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.765994 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.766097 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvntz\" (UniqueName: \"kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.766153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.766177 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.867378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvntz\" (UniqueName: \"kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.867489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.867514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.867543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.867569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.870611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.870716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.877064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.880206 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.889781 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvntz\" (UniqueName: 
\"kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz\") pod \"memcached-0\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " pod="openstack/memcached-0" Nov 22 09:06:45 crc kubenswrapper[4858]: I1122 09:06:45.968634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 09:06:46 crc kubenswrapper[4858]: I1122 09:06:46.251972 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:06:46 crc kubenswrapper[4858]: W1122 09:06:46.261492 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11074703_ddac_49f9_b53d_5ec6c721af7d.slice/crio-e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6 WatchSource:0}: Error finding container e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6: Status 404 returned error can't find the container with id e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6 Nov 22 09:06:46 crc kubenswrapper[4858]: I1122 09:06:46.425581 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 09:06:46 crc kubenswrapper[4858]: W1122 09:06:46.446120 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4271125_14af_4748_97ad_ed766b2d26b8.slice/crio-c222d0459fde1fa7c611a3b2d3152ef7a33b380e390942f5ad6ad138ec2d0ab9 WatchSource:0}: Error finding container c222d0459fde1fa7c611a3b2d3152ef7a33b380e390942f5ad6ad138ec2d0ab9: Status 404 returned error can't find the container with id c222d0459fde1fa7c611a3b2d3152ef7a33b380e390942f5ad6ad138ec2d0ab9 Nov 22 09:06:47 crc kubenswrapper[4858]: I1122 09:06:47.046146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerStarted","Data":"e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6"} Nov 22 09:06:47 crc kubenswrapper[4858]: I1122 09:06:47.049755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b4271125-14af-4748-97ad-ed766b2d26b8","Type":"ContainerStarted","Data":"c222d0459fde1fa7c611a3b2d3152ef7a33b380e390942f5ad6ad138ec2d0ab9"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.192991 4858 generic.go:334] "Generic (PLEG): container finished" podID="60f0782a-2b88-48ce-99d2-9245d1140a02" containerID="ab426fa5112d67bfc0beaf50b87af3ca822e51c8d63777442b3e085b5ea17c16" exitCode=0 Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.193545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" event={"ID":"60f0782a-2b88-48ce-99d2-9245d1140a02","Type":"ContainerDied","Data":"ab426fa5112d67bfc0beaf50b87af3ca822e51c8d63777442b3e085b5ea17c16"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.197612 4858 generic.go:334] "Generic (PLEG): container finished" podID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerID="455092e0d821612e1842b08a78b2b6da19ccf4c2175e7a89b98dc3a418db3c2f" exitCode=0 Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.197702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" event={"ID":"05250187-2f7f-4ae2-b003-7bc56e49c9ea","Type":"ContainerDied","Data":"455092e0d821612e1842b08a78b2b6da19ccf4c2175e7a89b98dc3a418db3c2f"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.201716 4858 
generic.go:334] "Generic (PLEG): container finished" podID="8acc1758-2810-44a3-88c3-042466670b3b" containerID="e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09" exitCode=0 Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.201810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" event={"ID":"8acc1758-2810-44a3-88c3-042466670b3b","Type":"ContainerDied","Data":"e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.215690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b4271125-14af-4748-97ad-ed766b2d26b8","Type":"ContainerStarted","Data":"0e9af0329f586f29a072f29f596f2dfaa4a85abfbc8d919d8bc5c0646f5a690e"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.217773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.224111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerStarted","Data":"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.228226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerStarted","Data":"82d7d106549f4cab1563ffa6d0ff10088ff06828f89a22c3f44a74a78f1a2c15"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.241974 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa53f6b5-1640-4261-9800-219d00653b49" containerID="81ae437f0735df8691b857e99d39fca2af6ac84945375a192c427453f6960d5a" exitCode=0 Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.242026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" event={"ID":"aa53f6b5-1640-4261-9800-219d00653b49","Type":"ContainerDied","Data":"81ae437f0735df8691b857e99d39fca2af6ac84945375a192c427453f6960d5a"} Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.376011 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.5901755619999998 podStartE2EDuration="16.375983316s" podCreationTimestamp="2025-11-22 09:06:45 +0000 UTC" firstStartedPulling="2025-11-22 09:06:46.449195654 +0000 UTC m=+6968.290618660" lastFinishedPulling="2025-11-22 09:07:00.235003408 +0000 UTC m=+6982.076426414" observedRunningTime="2025-11-22 09:07:01.369723535 +0000 UTC m=+6983.211146541" watchObservedRunningTime="2025-11-22 09:07:01.375983316 +0000 UTC m=+6983.217406322" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.705053 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.713716 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:07:01 crc kubenswrapper[4858]: E1122 09:07:01.760568 4858 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 22 09:07:01 crc kubenswrapper[4858]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/8acc1758-2810-44a3-88c3-042466670b3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 09:07:01 crc kubenswrapper[4858]: > podSandboxID="0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147" Nov 22 09:07:01 crc kubenswrapper[4858]: E1122 09:07:01.760804 4858 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 22 09:07:01 crc kubenswrapper[4858]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h54dhb7h666h69h76h59ch55ch65ch596h8h79h5c8h57hc8hfch5d7h697h79h698h5fch644hf9h54chbfh655hfchcbh5f8h646h5f7h89q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gl94h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c4878bb99-cq6v2_openstack(8acc1758-2810-44a3-88c3-042466670b3b): CreateContainerError: container create failed: mount 
`/var/lib/kubelet/pods/8acc1758-2810-44a3-88c3-042466670b3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 09:07:01 crc kubenswrapper[4858]: > logger="UnhandledError" Nov 22 09:07:01 crc kubenswrapper[4858]: E1122 09:07:01.762524 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8acc1758-2810-44a3-88c3-042466670b3b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" podUID="8acc1758-2810-44a3-88c3-042466670b3b" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.773773 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config\") pod \"60f0782a-2b88-48ce-99d2-9245d1140a02\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.773820 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc\") pod \"aa53f6b5-1640-4261-9800-219d00653b49\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.773867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwcpw\" (UniqueName: \"kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw\") pod \"60f0782a-2b88-48ce-99d2-9245d1140a02\" (UID: \"60f0782a-2b88-48ce-99d2-9245d1140a02\") " Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.773970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config\") pod \"aa53f6b5-1640-4261-9800-219d00653b49\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.774016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns7t7\" (UniqueName: \"kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7\") pod \"aa53f6b5-1640-4261-9800-219d00653b49\" (UID: \"aa53f6b5-1640-4261-9800-219d00653b49\") " Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.779375 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw" (OuterVolumeSpecName: "kube-api-access-zwcpw") pod "60f0782a-2b88-48ce-99d2-9245d1140a02" (UID: "60f0782a-2b88-48ce-99d2-9245d1140a02"). InnerVolumeSpecName "kube-api-access-zwcpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.779537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7" (OuterVolumeSpecName: "kube-api-access-ns7t7") pod "aa53f6b5-1640-4261-9800-219d00653b49" (UID: "aa53f6b5-1640-4261-9800-219d00653b49"). InnerVolumeSpecName "kube-api-access-ns7t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.793982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config" (OuterVolumeSpecName: "config") pod "aa53f6b5-1640-4261-9800-219d00653b49" (UID: "aa53f6b5-1640-4261-9800-219d00653b49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.794810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa53f6b5-1640-4261-9800-219d00653b49" (UID: "aa53f6b5-1640-4261-9800-219d00653b49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.795704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config" (OuterVolumeSpecName: "config") pod "60f0782a-2b88-48ce-99d2-9245d1140a02" (UID: "60f0782a-2b88-48ce-99d2-9245d1140a02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.876684 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwcpw\" (UniqueName: \"kubernetes.io/projected/60f0782a-2b88-48ce-99d2-9245d1140a02-kube-api-access-zwcpw\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.876736 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.876747 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns7t7\" (UniqueName: \"kubernetes.io/projected/aa53f6b5-1640-4261-9800-219d00653b49-kube-api-access-ns7t7\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.876759 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f0782a-2b88-48ce-99d2-9245d1140a02-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:01 crc kubenswrapper[4858]: I1122 09:07:01.876767 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa53f6b5-1640-4261-9800-219d00653b49-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.251903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" event={"ID":"aa53f6b5-1640-4261-9800-219d00653b49","Type":"ContainerDied","Data":"aa4c42a62a739c77c3be29273ba0ddee3f6f2d679da20acf787f51b7098d0ad5"} Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.251981 4858 scope.go:117] "RemoveContainer" containerID="81ae437f0735df8691b857e99d39fca2af6ac84945375a192c427453f6960d5a" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.251928 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5d89b87f-xmbcw" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.258418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerStarted","Data":"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262"} Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.264353 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" event={"ID":"60f0782a-2b88-48ce-99d2-9245d1140a02","Type":"ContainerDied","Data":"2e272d7c9fd751a9aaa2887d8fbb0432cace8b45b6095b2654b5d2844b46801c"} Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.264375 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-4wvz8" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.273930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" event={"ID":"05250187-2f7f-4ae2-b003-7bc56e49c9ea","Type":"ContainerStarted","Data":"c88462a9c66a266023146e7354d16646c5e06569f65fa83252b99817a87a2f9d"} Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.274183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.275053 4858 scope.go:117] "RemoveContainer" containerID="ab426fa5112d67bfc0beaf50b87af3ca822e51c8d63777442b3e085b5ea17c16" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.278210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerStarted","Data":"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97"} Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.366239 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" podStartSLOduration=4.103933806 podStartE2EDuration="21.366216389s" podCreationTimestamp="2025-11-22 09:06:41 +0000 UTC" firstStartedPulling="2025-11-22 09:06:42.972775507 +0000 UTC m=+6964.814198513" lastFinishedPulling="2025-11-22 09:07:00.23505809 +0000 UTC m=+6982.076481096" observedRunningTime="2025-11-22 09:07:02.358228064 +0000 UTC m=+6984.199667880" watchObservedRunningTime="2025-11-22 09:07:02.366216389 +0000 UTC m=+6984.207639395" Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.454130 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.461737 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-4wvz8"] Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.491172 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:07:02 crc kubenswrapper[4858]: I1122 09:07:02.501278 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5d89b87f-xmbcw"] Nov 22 09:07:03 crc kubenswrapper[4858]: I1122 09:07:03.289357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" event={"ID":"8acc1758-2810-44a3-88c3-042466670b3b","Type":"ContainerStarted","Data":"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679"} Nov 22 09:07:03 crc kubenswrapper[4858]: I1122 09:07:03.310362 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" podStartSLOduration=4.075901629 podStartE2EDuration="22.310343297s" podCreationTimestamp="2025-11-22 09:06:41 +0000 UTC" firstStartedPulling="2025-11-22 09:06:42.042007946 +0000 UTC m=+6963.883430952" lastFinishedPulling="2025-11-22 09:07:00.276449604 +0000 UTC m=+6982.117872620" observedRunningTime="2025-11-22 09:07:03.307679502 +0000 UTC m=+6985.149102498" watchObservedRunningTime="2025-11-22 09:07:03.310343297 +0000 UTC m=+6985.151766303" Nov 22 09:07:03 crc kubenswrapper[4858]: I1122 09:07:03.546554 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f0782a-2b88-48ce-99d2-9245d1140a02" path="/var/lib/kubelet/pods/60f0782a-2b88-48ce-99d2-9245d1140a02/volumes" Nov 22 09:07:03 crc kubenswrapper[4858]: I1122 09:07:03.547106 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa53f6b5-1640-4261-9800-219d00653b49" path="/var/lib/kubelet/pods/aa53f6b5-1640-4261-9800-219d00653b49/volumes" Nov 22 09:07:04 crc kubenswrapper[4858]: I1122 09:07:04.302181 4858 generic.go:334] "Generic (PLEG): container finished" podID="398c6958-f902-4b59-9afd-0275dea7251d" containerID="a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619" exitCode=0 Nov 22 09:07:04 crc kubenswrapper[4858]: I1122 09:07:04.302278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerDied","Data":"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619"} Nov 22 09:07:04 crc kubenswrapper[4858]: I1122 09:07:04.304022 4858 generic.go:334] "Generic (PLEG): container finished" podID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerID="82d7d106549f4cab1563ffa6d0ff10088ff06828f89a22c3f44a74a78f1a2c15" exitCode=0 Nov 22 09:07:04 crc kubenswrapper[4858]: I1122 09:07:04.304060 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerDied","Data":"82d7d106549f4cab1563ffa6d0ff10088ff06828f89a22c3f44a74a78f1a2c15"} Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.313549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerStarted","Data":"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e"} Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.315528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerStarted","Data":"d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246"} Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.344460 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.335121122 podStartE2EDuration="23.344439302s" podCreationTimestamp="2025-11-22 09:06:42 +0000 UTC" firstStartedPulling="2025-11-22 09:06:44.267048571 +0000 UTC m=+6966.108471577" lastFinishedPulling="2025-11-22 09:07:00.276366751 +0000 UTC m=+6982.117789757" observedRunningTime="2025-11-22 09:07:05.337623433 +0000 UTC m=+6987.179046439" watchObservedRunningTime="2025-11-22 09:07:05.344439302 +0000 UTC m=+6987.185862318" Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.362615 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.401617503 podStartE2EDuration="21.362598533s" podCreationTimestamp="2025-11-22 09:06:44 +0000 UTC" firstStartedPulling="2025-11-22 09:06:46.265456184 +0000 UTC m=+6968.106879190" lastFinishedPulling="2025-11-22 09:07:00.226437214 +0000 UTC m=+6982.067860220" observedRunningTime="2025-11-22 09:07:05.35812653 +0000 UTC m=+6987.199549546" watchObservedRunningTime="2025-11-22 09:07:05.362598533 +0000 UTC m=+6987.204021539" Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.554952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.555373 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 09:07:05 crc kubenswrapper[4858]: I1122 09:07:05.970196 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 09:07:06 crc kubenswrapper[4858]: I1122 09:07:06.527107 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.319621 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.379194 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.379723 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="dnsmasq-dns" containerID="cri-o://b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679" gracePeriod=10 Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.381502 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.847907 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.981028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl94h\" (UniqueName: \"kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h\") pod \"8acc1758-2810-44a3-88c3-042466670b3b\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.981176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config\") pod \"8acc1758-2810-44a3-88c3-042466670b3b\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.981265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc\") pod \"8acc1758-2810-44a3-88c3-042466670b3b\" (UID: \"8acc1758-2810-44a3-88c3-042466670b3b\") " Nov 22 09:07:07 crc kubenswrapper[4858]: I1122 09:07:07.987792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h" (OuterVolumeSpecName: "kube-api-access-gl94h") pod "8acc1758-2810-44a3-88c3-042466670b3b" (UID: "8acc1758-2810-44a3-88c3-042466670b3b"). InnerVolumeSpecName "kube-api-access-gl94h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.020632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config" (OuterVolumeSpecName: "config") pod "8acc1758-2810-44a3-88c3-042466670b3b" (UID: "8acc1758-2810-44a3-88c3-042466670b3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.023278 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8acc1758-2810-44a3-88c3-042466670b3b" (UID: "8acc1758-2810-44a3-88c3-042466670b3b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.083075 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.083702 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8acc1758-2810-44a3-88c3-042466670b3b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.083845 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl94h\" (UniqueName: \"kubernetes.io/projected/8acc1758-2810-44a3-88c3-042466670b3b-kube-api-access-gl94h\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.348142 4858 generic.go:334] "Generic (PLEG): container finished" podID="8acc1758-2810-44a3-88c3-042466670b3b" containerID="b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679" exitCode=0 Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.348201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" event={"ID":"8acc1758-2810-44a3-88c3-042466670b3b","Type":"ContainerDied","Data":"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679"} Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.348241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" event={"ID":"8acc1758-2810-44a3-88c3-042466670b3b","Type":"ContainerDied","Data":"0f28dce87fe57ad148e6ed4915e83a1c4a638122003ee8c572f589b84ad45147"} Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.348260 4858 scope.go:117] "RemoveContainer" containerID="b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.348278 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-cq6v2" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.374235 4858 scope.go:117] "RemoveContainer" containerID="e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.390777 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.400484 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-cq6v2"] Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.416502 4858 scope.go:117] "RemoveContainer" containerID="b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679" Nov 22 09:07:08 crc kubenswrapper[4858]: E1122 09:07:08.417192 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679\": container with ID starting with b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679 not found: ID does not exist" containerID="b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.417269 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679"} err="failed to get container status \"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679\": rpc error: code = NotFound desc = could not find container \"b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679\": container with ID starting with b0f7ba3b3c74a6328fea59b33fa52e07d5d80a2b0be516c9df63372ba7de0679 not found: ID does not exist" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.417314 4858 scope.go:117] "RemoveContainer" containerID="e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09" Nov 22 09:07:08 crc kubenswrapper[4858]: E1122 09:07:08.417966 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09\": container with ID starting with e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09 not found: ID does not exist" containerID="e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09" Nov 22 09:07:08 crc kubenswrapper[4858]: I1122 09:07:08.418022 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09"} err="failed to get container status \"e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09\": rpc error: code = NotFound desc = could not find container \"e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09\": container with ID starting with e3ee59e2f5d9138739a06706b0f768fdb9ba4b37b995b8930c2f6c89ad123c09 not found: ID does not exist" Nov 22 09:07:09 crc kubenswrapper[4858]: I1122 09:07:09.547728 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acc1758-2810-44a3-88c3-042466670b3b" path="/var/lib/kubelet/pods/8acc1758-2810-44a3-88c3-042466670b3b/volumes" Nov 22 09:07:09 crc kubenswrapper[4858]: I1122 09:07:09.636226 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 09:07:09 crc kubenswrapper[4858]: I1122 09:07:09.724287 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 09:07:13 crc kubenswrapper[4858]: I1122 09:07:13.837871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 09:07:13 crc kubenswrapper[4858]: I1122 09:07:13.837928 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 09:07:13 crc kubenswrapper[4858]: I1122 09:07:13.908836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 09:07:14 crc kubenswrapper[4858]: I1122 09:07:14.502557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 09:07:15 crc kubenswrapper[4858]: I1122 09:07:15.312466 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:07:15 crc kubenswrapper[4858]: I1122 09:07:15.312861 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:07:34 crc kubenswrapper[4858]: I1122 09:07:34.579635 4858 generic.go:334] "Generic (PLEG): container finished" podID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerID="887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262" exitCode=0 Nov 22 09:07:34 crc kubenswrapper[4858]: I1122 09:07:34.579729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerDied","Data":"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262"} Nov 22 09:07:34 crc kubenswrapper[4858]: I1122 09:07:34.583029 4858 generic.go:334] "Generic (PLEG): container finished" podID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerID="cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97" exitCode=0 Nov 22 09:07:34 crc kubenswrapper[4858]: I1122 09:07:34.583068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerDied","Data":"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97"} Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.601431 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerStarted","Data":"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7"} Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.602176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.604347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerStarted","Data":"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d"} Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.604937 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.633124 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.359767591 podStartE2EDuration="53.633106881s" podCreationTimestamp="2025-11-22 09:06:42 +0000 UTC" firstStartedPulling="2025-11-22 09:06:43.939149567 +0000 UTC m=+6965.780572563" lastFinishedPulling="2025-11-22 09:07:00.212488847 +0000 UTC m=+6982.053911853" observedRunningTime="2025-11-22 09:07:35.629932679 +0000 UTC m=+7017.471355685" watchObservedRunningTime="2025-11-22 09:07:35.633106881 +0000 UTC m=+7017.474529887" Nov 22 09:07:35 crc kubenswrapper[4858]: I1122 09:07:35.662053 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.084633144 podStartE2EDuration="54.662031196s" podCreationTimestamp="2025-11-22 09:06:41 +0000 UTC" firstStartedPulling="2025-11-22 09:06:43.674064282 +0000 UTC m=+6965.515487288" lastFinishedPulling="2025-11-22 09:07:00.251462334 +0000 UTC m=+6982.092885340" observedRunningTime="2025-11-22 09:07:35.658127332 +0000 UTC m=+7017.499550358" watchObservedRunningTime="2025-11-22 09:07:35.662031196 +0000 UTC m=+7017.503454202" Nov 22 09:07:45 crc kubenswrapper[4858]: I1122 09:07:45.312268 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:07:45 crc kubenswrapper[4858]: I1122 09:07:45.313354 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:07:53 crc kubenswrapper[4858]: I1122 09:07:53.108650 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:07:53 crc kubenswrapper[4858]: I1122 09:07:53.458524 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.572638 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:07:58 crc kubenswrapper[4858]: E1122 09:07:58.573863 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="dnsmasq-dns" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.573882 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="dnsmasq-dns" Nov 22 09:07:58 crc kubenswrapper[4858]: E1122 09:07:58.573917 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.573929 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: E1122 09:07:58.573952 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f0782a-2b88-48ce-99d2-9245d1140a02" containerName="init" Nov 22 09:07:58 crc 
kubenswrapper[4858]: I1122 09:07:58.573959 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f0782a-2b88-48ce-99d2-9245d1140a02" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: E1122 09:07:58.573972 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa53f6b5-1640-4261-9800-219d00653b49" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.573978 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa53f6b5-1640-4261-9800-219d00653b49" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.574200 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f0782a-2b88-48ce-99d2-9245d1140a02" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.574227 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8acc1758-2810-44a3-88c3-042466670b3b" containerName="dnsmasq-dns" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.574238 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa53f6b5-1640-4261-9800-219d00653b49" containerName="init" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.575311 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.582269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.667396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.667910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.667938 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmnk\" (UniqueName: \"kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.769796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.769879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.769911 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qbmnk\" (UniqueName: \"kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.770745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.770745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.794481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmnk\" (UniqueName: \"kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk\") pod \"dnsmasq-dns-5bf8f59b77-ff6cj\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:58 crc kubenswrapper[4858]: I1122 09:07:58.940847 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:07:59 crc kubenswrapper[4858]: I1122 09:07:59.377830 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:07:59 crc kubenswrapper[4858]: I1122 09:07:59.415101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:07:59 crc kubenswrapper[4858]: I1122 09:07:59.816204 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:07:59 crc kubenswrapper[4858]: I1122 09:07:59.848272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" event={"ID":"a8138fdb-6c2b-443c-860a-f2fbc31b04b9","Type":"ContainerStarted","Data":"998a37e05556e7f2390ddd1bfb4230e6700e84eab75fbedff1472d107aa3f4a7"} Nov 22 09:08:00 crc kubenswrapper[4858]: I1122 09:08:00.858456 4858 generic.go:334] "Generic (PLEG): container finished" podID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerID="7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e" exitCode=0 Nov 22 09:08:00 crc kubenswrapper[4858]: I1122 09:08:00.858536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" event={"ID":"a8138fdb-6c2b-443c-860a-f2fbc31b04b9","Type":"ContainerDied","Data":"7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e"} Nov 22 09:08:01 crc kubenswrapper[4858]: I1122 09:08:01.869488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" event={"ID":"a8138fdb-6c2b-443c-860a-f2fbc31b04b9","Type":"ContainerStarted","Data":"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1"} Nov 22 09:08:01 crc kubenswrapper[4858]: I1122 09:08:01.870626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:08:04 crc kubenswrapper[4858]: I1122 09:08:04.056729 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="rabbitmq" containerID="cri-o://6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7" gracePeriod=604796 Nov 22 09:08:04 crc kubenswrapper[4858]: I1122 09:08:04.747283 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="rabbitmq" containerID="cri-o://57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d" gracePeriod=604796 Nov 22 09:08:08 crc kubenswrapper[4858]: I1122 09:08:08.942484 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:08:08 crc kubenswrapper[4858]: I1122 09:08:08.976982 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" podStartSLOduration=10.976956433 podStartE2EDuration="10.976956433s" podCreationTimestamp="2025-11-22 09:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:08:01.900092958 +0000 UTC m=+7043.741515964" watchObservedRunningTime="2025-11-22 09:08:08.976956433 +0000 UTC m=+7050.818379429" Nov 22 09:08:08 crc kubenswrapper[4858]: I1122 09:08:08.995591 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:08:08 crc kubenswrapper[4858]: I1122 09:08:08.995842 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="dnsmasq-dns" containerID="cri-o://c88462a9c66a266023146e7354d16646c5e06569f65fa83252b99817a87a2f9d" gracePeriod=10 Nov 22 09:08:09 crc kubenswrapper[4858]: I1122 09:08:09.937239 4858 generic.go:334] "Generic (PLEG): container finished" podID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerID="c88462a9c66a266023146e7354d16646c5e06569f65fa83252b99817a87a2f9d" exitCode=0 Nov 22 09:08:09 crc kubenswrapper[4858]: I1122 09:08:09.937311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" event={"ID":"05250187-2f7f-4ae2-b003-7bc56e49c9ea","Type":"ContainerDied","Data":"c88462a9c66a266023146e7354d16646c5e06569f65fa83252b99817a87a2f9d"} Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.650722 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.763492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc\") pod \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.764049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69hrt\" (UniqueName: \"kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt\") pod \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.764772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config\") pod \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\" (UID: \"05250187-2f7f-4ae2-b003-7bc56e49c9ea\") " Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.770677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt" (OuterVolumeSpecName: "kube-api-access-69hrt") pod "05250187-2f7f-4ae2-b003-7bc56e49c9ea" (UID: "05250187-2f7f-4ae2-b003-7bc56e49c9ea"). InnerVolumeSpecName "kube-api-access-69hrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.806375 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05250187-2f7f-4ae2-b003-7bc56e49c9ea" (UID: "05250187-2f7f-4ae2-b003-7bc56e49c9ea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.810685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config" (OuterVolumeSpecName: "config") pod "05250187-2f7f-4ae2-b003-7bc56e49c9ea" (UID: "05250187-2f7f-4ae2-b003-7bc56e49c9ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.868699 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.868922 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69hrt\" (UniqueName: \"kubernetes.io/projected/05250187-2f7f-4ae2-b003-7bc56e49c9ea-kube-api-access-69hrt\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.868977 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05250187-2f7f-4ae2-b003-7bc56e49c9ea-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.952299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" event={"ID":"05250187-2f7f-4ae2-b003-7bc56e49c9ea","Type":"ContainerDied","Data":"34fee597f3cc6ff701f94bf70974dc496709573b1590615048fe4db5fda122d7"} Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.952401 4858 scope.go:117] "RemoveContainer" containerID="c88462a9c66a266023146e7354d16646c5e06569f65fa83252b99817a87a2f9d" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.952555 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-l4945" Nov 22 09:08:10 crc kubenswrapper[4858]: I1122 09:08:10.994395 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.000706 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-l4945"] Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.093440 4858 scope.go:117] "RemoveContainer" containerID="455092e0d821612e1842b08a78b2b6da19ccf4c2175e7a89b98dc3a418db3c2f" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.393141 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.483392 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czmf\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.483470 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.483530 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.483988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484224 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484254 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret\") pod 
\"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.484395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data\") pod \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\" (UID: \"e4d02b1a-c6cf-4409-938b-ab57a76cb248\") " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.487499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.491729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.492551 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.492954 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf" (OuterVolumeSpecName: "kube-api-access-2czmf") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "kube-api-access-2czmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.493830 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.500640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.501463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info" (OuterVolumeSpecName: "pod-info") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.510072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f" (OuterVolumeSpecName: "persistence") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.511790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data" (OuterVolumeSpecName: "config-data") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.536271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf" (OuterVolumeSpecName: "server-conf") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.548468 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" path="/var/lib/kubelet/pods/05250187-2f7f-4ae2-b003-7bc56e49c9ea/volumes" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.587886 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e4d02b1a-c6cf-4409-938b-ab57a76cb248" (UID: "e4d02b1a-c6cf-4409-938b-ab57a76cb248"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588073 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2czmf\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-kube-api-access-2czmf\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588222 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588238 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588371 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") on node \"crc\" " Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588457 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588534 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588584 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588599 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d02b1a-c6cf-4409-938b-ab57a76cb248-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588612 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d02b1a-c6cf-4409-938b-ab57a76cb248-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.588625 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d02b1a-c6cf-4409-938b-ab57a76cb248-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.610708 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.610995 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f") on node "crc" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.690297 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.690353 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d02b1a-c6cf-4409-938b-ab57a76cb248-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.965882 4858 generic.go:334] "Generic (PLEG): container finished" podID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerID="6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7" exitCode=0 Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.965985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerDied","Data":"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7"} Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.966066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d02b1a-c6cf-4409-938b-ab57a76cb248","Type":"ContainerDied","Data":"fd3e80e90c7160b409c0f6c14177a03e25895fd98bfffccf6993e3adad0a969e"} Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.966085 4858 scope.go:117] "RemoveContainer" containerID="6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7" Nov 22 09:08:11 crc kubenswrapper[4858]: I1122 09:08:11.966009 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.021986 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.026847 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.051523 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.052128 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="rabbitmq" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052153 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="rabbitmq" Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.052194 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="dnsmasq-dns" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052201 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="dnsmasq-dns" Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.052223 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="setup-container" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052236 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="setup-container" Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.052256 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="init" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052264 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="init" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052461 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" containerName="rabbitmq" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.052478 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="05250187-2f7f-4ae2-b003-7bc56e49c9ea" containerName="dnsmasq-dns" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.054857 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.059124 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.059437 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bzc8j" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.059565 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.062723 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.064548 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.064751 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.064942 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.068943 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.069985 4858 scope.go:117] "RemoveContainer" containerID="cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.098972 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.100036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.100310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.109594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4kv\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.109829 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 
22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.109899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.109937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.109979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.110007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.110048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.110112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.115967 4858 scope.go:117] "RemoveContainer" containerID="6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7" Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.119132 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7\": container with ID starting with 6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7 not found: ID does not exist" containerID="6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.119196 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7"} err="failed to get container status \"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7\": rpc error: code = NotFound desc = could not find container \"6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7\": container with ID starting with 6201af0fc4cfee80111cefee123a291ab669ede5efcf685562d2876f9280dbd7 not found: ID does not exist" Nov 
22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.119252 4858 scope.go:117] "RemoveContainer" containerID="cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97" Nov 22 09:08:12 crc kubenswrapper[4858]: E1122 09:08:12.120011 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97\": container with ID starting with cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97 not found: ID does not exist" containerID="cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.120074 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97"} err="failed to get container status \"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97\": rpc error: code = NotFound desc = could not find container \"cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97\": container with ID starting with cff9b5303528223bb37b5b4a4262f2a65efd14115cf2300a57cdebba2fd59b97 not found: ID does not exist" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4kv\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212514 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.212550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.213025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.213069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.213108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.213125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.213797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.215075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.216954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " 
pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.220882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.221070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.221102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.223064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.223892 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.223969 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c0cb2414112c3f79a17a93c397588a9527ebe61840fd9b61167ee318ce72b09/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.276231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4kv\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.313080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"rabbitmq-server-0\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.360622 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.403308 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418071 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418134 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418177 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9lz9\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418253 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418399 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls\") pod 
\"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.418582 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\" (UID: \"17e056a1-9fa1-482f-9c3a-42f2b45ce40b\") " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.419023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.419604 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.421591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.421920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info" (OuterVolumeSpecName: "pod-info") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.424888 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.425064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9" (OuterVolumeSpecName: "kube-api-access-s9lz9") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "kube-api-access-s9lz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.426526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.429969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079" (OuterVolumeSpecName: "persistence") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.449492 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data" (OuterVolumeSpecName: "config-data") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.469343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf" (OuterVolumeSpecName: "server-conf") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521127 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521174 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521189 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521202 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9lz9\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-kube-api-access-s9lz9\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521214 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521224 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521236 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521248 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-config-data\") on node \"crc\" DevicePath 
\"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521258 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.521299 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") on node \"crc\" " Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.524669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "17e056a1-9fa1-482f-9c3a-42f2b45ce40b" (UID: "17e056a1-9fa1-482f-9c3a-42f2b45ce40b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.544984 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.545511 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079") on node "crc" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.623262 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17e056a1-9fa1-482f-9c3a-42f2b45ce40b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.623656 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.928732 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.986692 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerStarted","Data":"97895e7fd29c018ddbfbfc26421fe78f6078d6297fcc2c821595d8c5df1e2ea2"} Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.989692 4858 generic.go:334] "Generic (PLEG): container finished" podID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerID="57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d" exitCode=0 Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.989745 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.989815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerDied","Data":"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d"} Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.989869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17e056a1-9fa1-482f-9c3a-42f2b45ce40b","Type":"ContainerDied","Data":"1263782ee5804a6c5d1bb4e2aa564ec569fa770029ae7a895c955a0680e7e34f"} Nov 22 09:08:12 crc kubenswrapper[4858]: I1122 09:08:12.989892 4858 scope.go:117] "RemoveContainer" containerID="57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.032117 4858 scope.go:117] "RemoveContainer" containerID="887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.055964 4858 scope.go:117] "RemoveContainer" containerID="57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d" Nov 22 09:08:13 crc kubenswrapper[4858]: E1122 09:08:13.056548 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d\": container with ID starting with 57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d not found: ID does not exist" containerID="57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.056612 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d"} err="failed to get container status \"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d\": rpc error: code = NotFound desc = could not find container \"57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d\": container with ID starting with 57d82a928ce163963602d7b20e66d93d52b15550095368db2d723309c73c740d not found: ID does not exist" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.056644 4858 scope.go:117] "RemoveContainer" containerID="887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262" Nov 22 09:08:13 crc kubenswrapper[4858]: E1122 09:08:13.056931 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262\": container with ID starting with 887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262 not found: ID does not exist" containerID="887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.056956 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262"} err="failed to get container status \"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262\": rpc error: code = NotFound desc = could not find container \"887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262\": container with ID starting with 887cebddb79de8f63959e459e6fe1edc215ed868e6681ea8ea04694f201ac262 not found: ID does not exist" Nov 22 09:08:13 crc kubenswrapper[4858]: 
I1122 09:08:13.060379 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.064600 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.083664 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:08:13 crc kubenswrapper[4858]: E1122 09:08:13.084116 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="rabbitmq" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.084130 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="rabbitmq" Nov 22 09:08:13 crc kubenswrapper[4858]: E1122 09:08:13.084155 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="setup-container" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.084162 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="setup-container" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.084308 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" containerName="rabbitmq" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.085181 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.088126 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.088167 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.089112 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.089971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.091028 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4dqbs" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.091744 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.091773 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.111944 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.131572 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.131975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.132971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlcvp\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133506 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.133789 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235749 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlcvp\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235903 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.235990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.236007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.236395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.237000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.237452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.238159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.238570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.241275 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.241340 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ab47d8670df5e82a50528381189c1984e72821a7c18b99b9894f87a7daf2012d/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.242578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.242611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.242731 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.246748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.257415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlcvp\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.269808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"rabbitmq-cell1-server-0\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.441349 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.566350 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e056a1-9fa1-482f-9c3a-42f2b45ce40b" path="/var/lib/kubelet/pods/17e056a1-9fa1-482f-9c3a-42f2b45ce40b/volumes" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.567085 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d02b1a-c6cf-4409-938b-ab57a76cb248" path="/var/lib/kubelet/pods/e4d02b1a-c6cf-4409-938b-ab57a76cb248/volumes" Nov 22 09:08:13 crc kubenswrapper[4858]: I1122 09:08:13.900789 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:08:14 crc kubenswrapper[4858]: I1122 09:08:14.004531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerStarted","Data":"d3e620de4d5b38f5a3b6436d53ac32053199e5dc69112d9c4a5a0c39c93238da"} Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.019904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerStarted","Data":"eeba1d571add7d50c07f586327e32795ac943ddbbdd3bc346a8173c54be363a8"} Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.312675 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.312777 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.312859 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.314248 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:08:15 crc kubenswrapper[4858]: I1122 09:08:15.314354 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454" gracePeriod=600 Nov 22 09:08:16 crc kubenswrapper[4858]: I1122 09:08:16.037164 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454" exitCode=0 Nov 22 09:08:16 crc kubenswrapper[4858]: I1122 09:08:16.037240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454"} Nov 22 09:08:16 crc kubenswrapper[4858]: I1122 09:08:16.037326 4858 scope.go:117] "RemoveContainer" containerID="2799b2fdba6ca6b427723b1d2093eec7c9c1589a8e784c063172c467a0acc356" Nov 22 09:08:16 crc kubenswrapper[4858]: I1122 09:08:16.040524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerStarted","Data":"3538fb3f59251c148ca9ef352cd6933fb18a4d40fa1eeb03004322fc80fe564d"} Nov 22 09:08:17 crc kubenswrapper[4858]: I1122 09:08:17.053746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f"} Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.349206 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.352449 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.372149 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.533587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsnv\" (UniqueName: \"kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.533680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.533992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.636296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsnv\" (UniqueName: \"kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.636821 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " 
pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.636942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.637769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.637919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.667677 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsnv\" (UniqueName: \"kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv\") pod \"redhat-operators-k9ncb\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:38 crc kubenswrapper[4858]: I1122 09:08:38.680001 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:39 crc kubenswrapper[4858]: I1122 09:08:39.194185 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:39 crc kubenswrapper[4858]: I1122 09:08:39.315090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerStarted","Data":"2cf6b8fd49c7555533d797dabc267cd3901222887049f17140ac9365bfed2eb1"} Nov 22 09:08:40 crc kubenswrapper[4858]: I1122 09:08:40.326538 4858 generic.go:334] "Generic (PLEG): container finished" podID="2dd88c0d-f590-468e-af46-897a30d14040" containerID="755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41" exitCode=0 Nov 22 09:08:40 crc kubenswrapper[4858]: I1122 09:08:40.326614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerDied","Data":"755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41"} Nov 22 09:08:41 crc kubenswrapper[4858]: I1122 09:08:41.341044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerStarted","Data":"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd"} Nov 22 09:08:42 crc kubenswrapper[4858]: I1122 09:08:42.351670 4858 generic.go:334] "Generic (PLEG): container finished" podID="2dd88c0d-f590-468e-af46-897a30d14040" containerID="c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd" exitCode=0 Nov 22 09:08:42 crc kubenswrapper[4858]: I1122 09:08:42.351775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerDied","Data":"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd"} Nov 22 09:08:43 crc kubenswrapper[4858]: I1122 09:08:43.363911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerStarted","Data":"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4"} Nov 22 09:08:43 crc kubenswrapper[4858]: I1122 09:08:43.387499 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9ncb" podStartSLOduration=2.9180296070000002 podStartE2EDuration="5.387472155s" podCreationTimestamp="2025-11-22 09:08:38 +0000 UTC" firstStartedPulling="2025-11-22 09:08:40.329395267 +0000 UTC m=+7082.170818273" lastFinishedPulling="2025-11-22 09:08:42.798837815 +0000 UTC m=+7084.640260821" observedRunningTime="2025-11-22 09:08:43.38326671 +0000 UTC m=+7085.224689736" watchObservedRunningTime="2025-11-22 09:08:43.387472155 +0000 UTC m=+7085.228895161" Nov 22 09:08:48 crc kubenswrapper[4858]: I1122 09:08:48.411523 4858 generic.go:334] "Generic (PLEG): container finished" podID="59060e41-09d2-4441-8563-5302fd77a52d" containerID="eeba1d571add7d50c07f586327e32795ac943ddbbdd3bc346a8173c54be363a8" exitCode=0 Nov 22 09:08:48 crc kubenswrapper[4858]: I1122 09:08:48.411647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerDied","Data":"eeba1d571add7d50c07f586327e32795ac943ddbbdd3bc346a8173c54be363a8"} Nov 22 09:08:48 crc kubenswrapper[4858]: I1122 09:08:48.680571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:48 crc kubenswrapper[4858]: I1122 09:08:48.681701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:48 crc kubenswrapper[4858]: I1122 09:08:48.730334 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:49 crc kubenswrapper[4858]: I1122 09:08:49.426052 4858 generic.go:334] "Generic (PLEG): container finished" podID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerID="3538fb3f59251c148ca9ef352cd6933fb18a4d40fa1eeb03004322fc80fe564d" exitCode=0 Nov 22 09:08:49 crc kubenswrapper[4858]: I1122 09:08:49.426209 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerDied","Data":"3538fb3f59251c148ca9ef352cd6933fb18a4d40fa1eeb03004322fc80fe564d"} Nov 22 09:08:49 crc kubenswrapper[4858]: I1122 09:08:49.481809 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:49 crc kubenswrapper[4858]: I1122 09:08:49.535085 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:50 crc kubenswrapper[4858]: I1122 09:08:50.448513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerStarted","Data":"7a1b9aa9bf7fdcfe3b6dd842717d88716652a749a754b92b43ad5226f5e6ec33"} Nov 22 09:08:50 crc kubenswrapper[4858]: I1122 
09:08:50.450250 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:08:50 crc kubenswrapper[4858]: I1122 09:08:50.454715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerStarted","Data":"48243763ff91a842163928192fc2ea246f302325792033ccd2427519d16f31b0"} Nov 22 09:08:50 crc kubenswrapper[4858]: I1122 09:08:50.455015 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 09:08:50 crc kubenswrapper[4858]: I1122 09:08:50.485265 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.485236618 podStartE2EDuration="37.485236618s" podCreationTimestamp="2025-11-22 09:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:08:50.4818597 +0000 UTC m=+7092.323282726" watchObservedRunningTime="2025-11-22 09:08:50.485236618 +0000 UTC m=+7092.326659624" Nov 22 09:08:51 crc kubenswrapper[4858]: I1122 09:08:51.465484 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9ncb" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="registry-server" containerID="cri-o://677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4" gracePeriod=2 Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.433397 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.461642 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.461623405 podStartE2EDuration="40.461623405s" podCreationTimestamp="2025-11-22 09:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:08:50.518868024 +0000 UTC m=+7092.360291030" watchObservedRunningTime="2025-11-22 09:08:52.461623405 +0000 UTC m=+7094.303046411" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.483904 4858 generic.go:334] "Generic (PLEG): container finished" podID="2dd88c0d-f590-468e-af46-897a30d14040" containerID="677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4" exitCode=0 Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.484849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerDied","Data":"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4"} Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.484935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9ncb" event={"ID":"2dd88c0d-f590-468e-af46-897a30d14040","Type":"ContainerDied","Data":"2cf6b8fd49c7555533d797dabc267cd3901222887049f17140ac9365bfed2eb1"} Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.484999 4858 scope.go:117] "RemoveContainer" containerID="677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.485185 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9ncb" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.508127 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities\") pod \"2dd88c0d-f590-468e-af46-897a30d14040\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.508296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsnv\" (UniqueName: \"kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv\") pod \"2dd88c0d-f590-468e-af46-897a30d14040\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.508361 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content\") pod \"2dd88c0d-f590-468e-af46-897a30d14040\" (UID: \"2dd88c0d-f590-468e-af46-897a30d14040\") " Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.511185 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities" (OuterVolumeSpecName: "utilities") pod "2dd88c0d-f590-468e-af46-897a30d14040" (UID: "2dd88c0d-f590-468e-af46-897a30d14040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.522388 4858 scope.go:117] "RemoveContainer" containerID="c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.532228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv" (OuterVolumeSpecName: "kube-api-access-qpsnv") pod "2dd88c0d-f590-468e-af46-897a30d14040" (UID: "2dd88c0d-f590-468e-af46-897a30d14040"). InnerVolumeSpecName "kube-api-access-qpsnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.563174 4858 scope.go:117] "RemoveContainer" containerID="755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.595431 4858 scope.go:117] "RemoveContainer" containerID="677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4" Nov 22 09:08:52 crc kubenswrapper[4858]: E1122 09:08:52.597962 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4\": container with ID starting with 677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4 not found: ID does not exist" containerID="677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.598055 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4"} err="failed to get container status \"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4\": rpc error: code = NotFound desc = could not find container \"677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4\": container with ID starting with 677938bfb4536f79879a639552061c03294a2ffa4d424c0318447c005f56a6c4 not found: ID does not exist" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.598116 4858 scope.go:117] "RemoveContainer" containerID="c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd" Nov 22 09:08:52 crc kubenswrapper[4858]: E1122 09:08:52.598779 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd\": container with ID starting with c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd not found: ID does not exist" containerID="c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.598826 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd"} err="failed to get container status \"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd\": rpc error: code = NotFound desc = could not find container \"c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd\": container with ID starting with c777e8e93b54ce1c704315176dd5c8df62e5d6d7c3fa4fab2619a95179466bcd not found: ID does not exist" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.598856 4858 scope.go:117] "RemoveContainer" containerID="755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41" Nov 22 09:08:52 crc kubenswrapper[4858]: E1122 09:08:52.599447 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41\": container with ID starting with 755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41 not found: ID does not exist" containerID="755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.599767 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41"} err="failed to get container status \"755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41\": rpc error: code = NotFound desc = could not find container \"755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41\": container with ID starting with 755c55105098f3d869db4e4eec45beec24b8cde10b5bacb4e5d800eb0d4aae41 not found: ID does not exist" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.611214 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.611258 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsnv\" (UniqueName: \"kubernetes.io/projected/2dd88c0d-f590-468e-af46-897a30d14040-kube-api-access-qpsnv\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.635601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dd88c0d-f590-468e-af46-897a30d14040" (UID: "2dd88c0d-f590-468e-af46-897a30d14040"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.714633 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd88c0d-f590-468e-af46-897a30d14040-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.824056 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:52 crc kubenswrapper[4858]: I1122 09:08:52.835655 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9ncb"] Nov 22 09:08:53 crc kubenswrapper[4858]: I1122 09:08:53.547823 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd88c0d-f590-468e-af46-897a30d14040" path="/var/lib/kubelet/pods/2dd88c0d-f590-468e-af46-897a30d14040/volumes" Nov 22 09:09:02 crc kubenswrapper[4858]: I1122 09:09:02.406823 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 09:09:03 crc kubenswrapper[4858]: I1122 09:09:03.444661 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.321738 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 09:09:06 crc kubenswrapper[4858]: E1122 09:09:06.323291 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="registry-server" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.323311 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="registry-server" Nov 22 09:09:06 crc kubenswrapper[4858]: E1122 09:09:06.323343 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="extract-content" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.323350 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="extract-content" Nov 22 09:09:06 crc kubenswrapper[4858]: E1122 09:09:06.323364 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="extract-utilities" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.323372 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="extract-utilities" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.323581 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd88c0d-f590-468e-af46-897a30d14040" containerName="registry-server" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.324264 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.327358 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rphst" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.332091 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.368774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2nf6\" (UniqueName: \"kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6\") pod \"mariadb-client-1-default\" (UID: \"6194ffaf-55ea-4e91-b3ba-9353cdb629d2\") " pod="openstack/mariadb-client-1-default" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.471346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2nf6\" (UniqueName: \"kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6\") pod \"mariadb-client-1-default\" (UID: \"6194ffaf-55ea-4e91-b3ba-9353cdb629d2\") " pod="openstack/mariadb-client-1-default" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.498806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2nf6\" (UniqueName: \"kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6\") pod \"mariadb-client-1-default\" (UID: \"6194ffaf-55ea-4e91-b3ba-9353cdb629d2\") " pod="openstack/mariadb-client-1-default" Nov 22 09:09:06 crc kubenswrapper[4858]: I1122 09:09:06.647056 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 09:09:07 crc kubenswrapper[4858]: I1122 09:09:07.206031 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 09:09:07 crc kubenswrapper[4858]: I1122 09:09:07.221654 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:09:07 crc kubenswrapper[4858]: I1122 09:09:07.627347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6194ffaf-55ea-4e91-b3ba-9353cdb629d2","Type":"ContainerStarted","Data":"41db8818c31066b39e6c399a3d3fa8250a238c8764459eb9ca1e24aa70a9da72"} Nov 22 09:09:10 crc kubenswrapper[4858]: I1122 09:09:10.654114 4858 generic.go:334] "Generic (PLEG): container finished" podID="6194ffaf-55ea-4e91-b3ba-9353cdb629d2" containerID="2f565e19489e9b5580cd2ebb62a9f7a919ca4638f31c7d1bcc528fd9cb703f09" exitCode=0 Nov 22 09:09:10 crc kubenswrapper[4858]: I1122 09:09:10.654182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"6194ffaf-55ea-4e91-b3ba-9353cdb629d2","Type":"ContainerDied","Data":"2f565e19489e9b5580cd2ebb62a9f7a919ca4638f31c7d1bcc528fd9cb703f09"} Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.288137 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.317555 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_6194ffaf-55ea-4e91-b3ba-9353cdb629d2/mariadb-client-1-default/0.log" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.342754 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.348872 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.372576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2nf6\" (UniqueName: \"kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6\") pod \"6194ffaf-55ea-4e91-b3ba-9353cdb629d2\" (UID: \"6194ffaf-55ea-4e91-b3ba-9353cdb629d2\") " Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.380784 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6" (OuterVolumeSpecName: "kube-api-access-f2nf6") pod "6194ffaf-55ea-4e91-b3ba-9353cdb629d2" (UID: "6194ffaf-55ea-4e91-b3ba-9353cdb629d2"). InnerVolumeSpecName "kube-api-access-f2nf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.476036 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2nf6\" (UniqueName: \"kubernetes.io/projected/6194ffaf-55ea-4e91-b3ba-9353cdb629d2-kube-api-access-f2nf6\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.686339 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41db8818c31066b39e6c399a3d3fa8250a238c8764459eb9ca1e24aa70a9da72" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.687058 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.935144 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 09:09:12 crc kubenswrapper[4858]: E1122 09:09:12.935586 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6194ffaf-55ea-4e91-b3ba-9353cdb629d2" containerName="mariadb-client-1-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.935607 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6194ffaf-55ea-4e91-b3ba-9353cdb629d2" containerName="mariadb-client-1-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.935806 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6194ffaf-55ea-4e91-b3ba-9353cdb629d2" containerName="mariadb-client-1-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.936619 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.945755 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rphst" Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.954206 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 09:09:12 crc kubenswrapper[4858]: I1122 09:09:12.986121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4kn\" (UniqueName: \"kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn\") pod \"mariadb-client-2-default\" (UID: \"57797edf-e9d7-45f0-809c-05439b7a53a7\") " pod="openstack/mariadb-client-2-default" Nov 22 09:09:13 crc kubenswrapper[4858]: I1122 09:09:13.088829 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4kn\" (UniqueName: \"kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn\") pod \"mariadb-client-2-default\" (UID: \"57797edf-e9d7-45f0-809c-05439b7a53a7\") " pod="openstack/mariadb-client-2-default" Nov 22 09:09:13 crc kubenswrapper[4858]: I1122 09:09:13.110720 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4kn\" (UniqueName: \"kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn\") pod \"mariadb-client-2-default\" (UID: \"57797edf-e9d7-45f0-809c-05439b7a53a7\") " pod="openstack/mariadb-client-2-default" Nov 22 09:09:13 crc kubenswrapper[4858]: I1122 09:09:13.262850 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 09:09:13 crc kubenswrapper[4858]: I1122 09:09:13.546767 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6194ffaf-55ea-4e91-b3ba-9353cdb629d2" path="/var/lib/kubelet/pods/6194ffaf-55ea-4e91-b3ba-9353cdb629d2/volumes" Nov 22 09:09:13 crc kubenswrapper[4858]: I1122 09:09:13.805785 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 09:09:13 crc kubenswrapper[4858]: W1122 09:09:13.809216 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57797edf_e9d7_45f0_809c_05439b7a53a7.slice/crio-31a3a6de0ca298263895f8cc87f80ca722855eab92769383139acce606cbb14a WatchSource:0}: Error finding container 31a3a6de0ca298263895f8cc87f80ca722855eab92769383139acce606cbb14a: Status 404 returned error can't find the container with id 31a3a6de0ca298263895f8cc87f80ca722855eab92769383139acce606cbb14a Nov 22 09:09:14 crc kubenswrapper[4858]: I1122 09:09:14.705386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"57797edf-e9d7-45f0-809c-05439b7a53a7","Type":"ContainerStarted","Data":"b30277866647cc8c57098c572b863222023714d75cf69ba1112e5f514c42ddfc"} Nov 22 09:09:14 crc kubenswrapper[4858]: I1122 09:09:14.705777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"57797edf-e9d7-45f0-809c-05439b7a53a7","Type":"ContainerStarted","Data":"31a3a6de0ca298263895f8cc87f80ca722855eab92769383139acce606cbb14a"} Nov 22 09:09:14 crc kubenswrapper[4858]: I1122 09:09:14.720743 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=2.720702626 podStartE2EDuration="2.720702626s" podCreationTimestamp="2025-11-22 09:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:09:14.719489446 +0000 UTC m=+7116.560912462" watchObservedRunningTime="2025-11-22 09:09:14.720702626 +0000 UTC m=+7116.562125642" Nov 22 09:09:15 crc kubenswrapper[4858]: I1122 09:09:15.302687 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_57797edf-e9d7-45f0-809c-05439b7a53a7/mariadb-client-2-default/0.log" Nov 22 09:09:15 crc kubenswrapper[4858]: I1122 09:09:15.716440 4858 generic.go:334] "Generic (PLEG): container finished" podID="57797edf-e9d7-45f0-809c-05439b7a53a7" containerID="b30277866647cc8c57098c572b863222023714d75cf69ba1112e5f514c42ddfc" exitCode=0 Nov 22 09:09:15 crc kubenswrapper[4858]: I1122 09:09:15.716570 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"57797edf-e9d7-45f0-809c-05439b7a53a7","Type":"ContainerDied","Data":"b30277866647cc8c57098c572b863222023714d75cf69ba1112e5f514c42ddfc"} Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.106228 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.147882 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.154689 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.168741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j4kn\" (UniqueName: \"kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn\") pod \"57797edf-e9d7-45f0-809c-05439b7a53a7\" (UID: \"57797edf-e9d7-45f0-809c-05439b7a53a7\") " Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.176920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn" (OuterVolumeSpecName: "kube-api-access-5j4kn") pod "57797edf-e9d7-45f0-809c-05439b7a53a7" (UID: "57797edf-e9d7-45f0-809c-05439b7a53a7"). InnerVolumeSpecName "kube-api-access-5j4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.261514 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-3-default"] Nov 22 09:09:17 crc kubenswrapper[4858]: E1122 09:09:17.261971 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57797edf-e9d7-45f0-809c-05439b7a53a7" containerName="mariadb-client-2-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.261993 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="57797edf-e9d7-45f0-809c-05439b7a53a7" containerName="mariadb-client-2-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.262225 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="57797edf-e9d7-45f0-809c-05439b7a53a7" containerName="mariadb-client-2-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.265727 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-3-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.270713 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j4kn\" (UniqueName: \"kubernetes.io/projected/57797edf-e9d7-45f0-809c-05439b7a53a7-kube-api-access-5j4kn\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.276421 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-3-default"] Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.372143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trld\" (UniqueName: \"kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld\") pod \"mariadb-client-3-default\" (UID: \"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6\") " pod="openstack/mariadb-client-3-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.473641 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trld\" (UniqueName: \"kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld\") pod \"mariadb-client-3-default\" (UID: \"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6\") " pod="openstack/mariadb-client-3-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.497863 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trld\" (UniqueName: \"kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld\") pod \"mariadb-client-3-default\" (UID: \"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6\") " pod="openstack/mariadb-client-3-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.548588 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57797edf-e9d7-45f0-809c-05439b7a53a7" path="/var/lib/kubelet/pods/57797edf-e9d7-45f0-809c-05439b7a53a7/volumes" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.594369 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-3-default" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.737062 4858 scope.go:117] "RemoveContainer" containerID="b30277866647cc8c57098c572b863222023714d75cf69ba1112e5f514c42ddfc" Nov 22 09:09:17 crc kubenswrapper[4858]: I1122 09:09:17.737744 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 09:09:18 crc kubenswrapper[4858]: I1122 09:09:18.155964 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-3-default"] Nov 22 09:09:18 crc kubenswrapper[4858]: I1122 09:09:18.748752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6","Type":"ContainerStarted","Data":"164b1084f9d893279d0688ad876eabfc6c6fa0f76b38ac5eb369c74ee39873de"} Nov 22 09:09:18 crc kubenswrapper[4858]: I1122 09:09:18.749092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6","Type":"ContainerStarted","Data":"b2c433caac4db62be8224660e1d00cd6dccee7f4483768fd9eeeeb027e711010"} Nov 22 09:09:18 crc kubenswrapper[4858]: I1122 09:09:18.771899 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-3-default" podStartSLOduration=1.771856788 podStartE2EDuration="1.771856788s" podCreationTimestamp="2025-11-22 09:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:09:18.765084171 +0000 UTC m=+7120.606507177" watchObservedRunningTime="2025-11-22 09:09:18.771856788 +0000 UTC m=+7120.613279794" Nov 22 09:09:21 crc kubenswrapper[4858]: I1122 09:09:21.792189 4858 generic.go:334] "Generic (PLEG): container finished" podID="59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" containerID="164b1084f9d893279d0688ad876eabfc6c6fa0f76b38ac5eb369c74ee39873de" exitCode=0 Nov 22 09:09:21 crc kubenswrapper[4858]: I1122 09:09:21.792253 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-3-default" event={"ID":"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6","Type":"ContainerDied","Data":"164b1084f9d893279d0688ad876eabfc6c6fa0f76b38ac5eb369c74ee39873de"} Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.186866 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-3-default" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.229824 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-3-default"] Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.236501 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-3-default"] Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.285289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trld\" (UniqueName: \"kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld\") pod \"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6\" (UID: \"59e3c4a0-22ce-4370-9340-73ba6f1d9fe6\") " Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.293052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld" (OuterVolumeSpecName: "kube-api-access-7trld") pod "59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" (UID: "59e3c4a0-22ce-4370-9340-73ba6f1d9fe6"). InnerVolumeSpecName "kube-api-access-7trld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.387735 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trld\" (UniqueName: \"kubernetes.io/projected/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6-kube-api-access-7trld\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.549798 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" path="/var/lib/kubelet/pods/59e3c4a0-22ce-4370-9340-73ba6f1d9fe6/volumes" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.762373 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 22 09:09:23 crc kubenswrapper[4858]: E1122 09:09:23.763146 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" containerName="mariadb-client-3-default" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.763197 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" containerName="mariadb-client-3-default" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.763579 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e3c4a0-22ce-4370-9340-73ba6f1d9fe6" containerName="mariadb-client-3-default" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.767567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.795045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.813103 4858 scope.go:117] "RemoveContainer" containerID="164b1084f9d893279d0688ad876eabfc6c6fa0f76b38ac5eb369c74ee39873de" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.813590 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-3-default" Nov 22 09:09:23 crc kubenswrapper[4858]: I1122 09:09:23.898173 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrlr\" (UniqueName: \"kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr\") pod \"mariadb-client-1\" (UID: \"46229f23-79ed-441c-8ead-b62a135cb304\") " pod="openstack/mariadb-client-1" Nov 22 09:09:24 crc kubenswrapper[4858]: I1122 09:09:23.999871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrlr\" (UniqueName: \"kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr\") pod \"mariadb-client-1\" (UID: \"46229f23-79ed-441c-8ead-b62a135cb304\") " pod="openstack/mariadb-client-1" Nov 22 09:09:24 crc kubenswrapper[4858]: I1122 09:09:24.017485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrlr\" (UniqueName: \"kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr\") pod \"mariadb-client-1\" (UID: \"46229f23-79ed-441c-8ead-b62a135cb304\") " pod="openstack/mariadb-client-1" Nov 22 09:09:24 crc kubenswrapper[4858]: I1122 09:09:24.090428 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 09:09:24 crc kubenswrapper[4858]: I1122 09:09:24.643412 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 09:09:24 crc kubenswrapper[4858]: I1122 09:09:24.822660 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"46229f23-79ed-441c-8ead-b62a135cb304","Type":"ContainerStarted","Data":"448600da6ff5d822376892865305f16f2dfc6b385d331127a8bfb54da6ea85dd"} Nov 22 09:09:25 crc kubenswrapper[4858]: I1122 09:09:25.832625 4858 generic.go:334] "Generic (PLEG): container finished" podID="46229f23-79ed-441c-8ead-b62a135cb304" containerID="d6fdb599fd9d248746e23a879a276e18b566b289ffeb001d6872a408f4ab5307" exitCode=0 Nov 22 09:09:25 crc kubenswrapper[4858]: I1122 09:09:25.832689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"46229f23-79ed-441c-8ead-b62a135cb304","Type":"ContainerDied","Data":"d6fdb599fd9d248746e23a879a276e18b566b289ffeb001d6872a408f4ab5307"} Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.265890 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.284769 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_46229f23-79ed-441c-8ead-b62a135cb304/mariadb-client-1/0.log" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.311420 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.317262 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.356237 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlrlr\" (UniqueName: \"kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr\") pod \"46229f23-79ed-441c-8ead-b62a135cb304\" (UID: \"46229f23-79ed-441c-8ead-b62a135cb304\") " Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.362741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr" (OuterVolumeSpecName: "kube-api-access-mlrlr") pod "46229f23-79ed-441c-8ead-b62a135cb304" (UID: "46229f23-79ed-441c-8ead-b62a135cb304"). InnerVolumeSpecName "kube-api-access-mlrlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.459247 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlrlr\" (UniqueName: \"kubernetes.io/projected/46229f23-79ed-441c-8ead-b62a135cb304-kube-api-access-mlrlr\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.545934 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46229f23-79ed-441c-8ead-b62a135cb304" path="/var/lib/kubelet/pods/46229f23-79ed-441c-8ead-b62a135cb304/volumes" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.805235 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 09:09:27 crc kubenswrapper[4858]: E1122 09:09:27.805921 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46229f23-79ed-441c-8ead-b62a135cb304" containerName="mariadb-client-1" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.805961 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="46229f23-79ed-441c-8ead-b62a135cb304" containerName="mariadb-client-1" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.806271 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="46229f23-79ed-441c-8ead-b62a135cb304" containerName="mariadb-client-1" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.807292 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.818249 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.855293 4858 scope.go:117] "RemoveContainer" containerID="d6fdb599fd9d248746e23a879a276e18b566b289ffeb001d6872a408f4ab5307" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.855435 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.864769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qql\" (UniqueName: \"kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql\") pod \"mariadb-client-4-default\" (UID: \"f62fc540-052e-45cc-8afb-881ab1ecc336\") " pod="openstack/mariadb-client-4-default" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.966120 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qql\" (UniqueName: \"kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql\") pod \"mariadb-client-4-default\" (UID: \"f62fc540-052e-45cc-8afb-881ab1ecc336\") " pod="openstack/mariadb-client-4-default" Nov 22 09:09:27 crc kubenswrapper[4858]: I1122 09:09:27.983474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qql\" (UniqueName: \"kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql\") pod \"mariadb-client-4-default\" (UID: \"f62fc540-052e-45cc-8afb-881ab1ecc336\") " pod="openstack/mariadb-client-4-default" Nov 22 09:09:28 crc kubenswrapper[4858]: I1122 09:09:28.132444 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 09:09:28 crc kubenswrapper[4858]: I1122 09:09:28.455618 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 09:09:28 crc kubenswrapper[4858]: W1122 09:09:28.460988 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf62fc540_052e_45cc_8afb_881ab1ecc336.slice/crio-b72728f80a54c0476c8b00287a4656538eff6b3d0df15f2884c97135e949e5f5 WatchSource:0}: Error finding container b72728f80a54c0476c8b00287a4656538eff6b3d0df15f2884c97135e949e5f5: Status 404 returned error can't find the container with id b72728f80a54c0476c8b00287a4656538eff6b3d0df15f2884c97135e949e5f5 Nov 22 09:09:28 crc kubenswrapper[4858]: I1122 09:09:28.869478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f62fc540-052e-45cc-8afb-881ab1ecc336","Type":"ContainerStarted","Data":"b72728f80a54c0476c8b00287a4656538eff6b3d0df15f2884c97135e949e5f5"} Nov 22 09:09:29 crc kubenswrapper[4858]: I1122 09:09:29.878343 4858 generic.go:334] "Generic (PLEG): container finished" podID="f62fc540-052e-45cc-8afb-881ab1ecc336" containerID="e8468816957bc7da95d26e2531d20b1120685fa73d6c6d198e1b65c7d4c8e643" exitCode=0 Nov 22 09:09:29 crc kubenswrapper[4858]: I1122 09:09:29.878388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f62fc540-052e-45cc-8afb-881ab1ecc336","Type":"ContainerDied","Data":"e8468816957bc7da95d26e2531d20b1120685fa73d6c6d198e1b65c7d4c8e643"} Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.257354 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.279101 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_f62fc540-052e-45cc-8afb-881ab1ecc336/mariadb-client-4-default/0.log" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.306842 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.316094 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.423734 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4qql\" (UniqueName: \"kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql\") pod \"f62fc540-052e-45cc-8afb-881ab1ecc336\" (UID: \"f62fc540-052e-45cc-8afb-881ab1ecc336\") " Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.432127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql" (OuterVolumeSpecName: "kube-api-access-n4qql") pod "f62fc540-052e-45cc-8afb-881ab1ecc336" (UID: "f62fc540-052e-45cc-8afb-881ab1ecc336"). InnerVolumeSpecName "kube-api-access-n4qql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.525513 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4qql\" (UniqueName: \"kubernetes.io/projected/f62fc540-052e-45cc-8afb-881ab1ecc336-kube-api-access-n4qql\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.544992 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62fc540-052e-45cc-8afb-881ab1ecc336" path="/var/lib/kubelet/pods/f62fc540-052e-45cc-8afb-881ab1ecc336/volumes" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.901803 4858 scope.go:117] "RemoveContainer" containerID="e8468816957bc7da95d26e2531d20b1120685fa73d6c6d198e1b65c7d4c8e643" Nov 22 09:09:31 crc kubenswrapper[4858]: I1122 09:09:31.901830 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.133558 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 09:09:35 crc kubenswrapper[4858]: E1122 09:09:35.134477 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62fc540-052e-45cc-8afb-881ab1ecc336" containerName="mariadb-client-4-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.134492 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62fc540-052e-45cc-8afb-881ab1ecc336" containerName="mariadb-client-4-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.134638 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62fc540-052e-45cc-8afb-881ab1ecc336" containerName="mariadb-client-4-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.135208 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.138883 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rphst" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.146529 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.207817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqh5l\" (UniqueName: \"kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l\") pod \"mariadb-client-5-default\" (UID: \"1a65ce59-d1e7-4b40-af53-14081345448e\") " pod="openstack/mariadb-client-5-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.309240 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqh5l\" (UniqueName: \"kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l\") pod \"mariadb-client-5-default\" (UID: \"1a65ce59-d1e7-4b40-af53-14081345448e\") " pod="openstack/mariadb-client-5-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.337356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqh5l\" (UniqueName: \"kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l\") pod \"mariadb-client-5-default\" (UID: \"1a65ce59-d1e7-4b40-af53-14081345448e\") " pod="openstack/mariadb-client-5-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.463485 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 09:09:35 crc kubenswrapper[4858]: I1122 09:09:35.965266 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 09:09:36 crc kubenswrapper[4858]: I1122 09:09:36.944495 4858 generic.go:334] "Generic (PLEG): container finished" podID="1a65ce59-d1e7-4b40-af53-14081345448e" containerID="f5d450601fd441e351dde75610edd54c37bcc0b91f5afee1c0ac387aee2c9bbf" exitCode=0 Nov 22 09:09:36 crc kubenswrapper[4858]: I1122 09:09:36.944576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"1a65ce59-d1e7-4b40-af53-14081345448e","Type":"ContainerDied","Data":"f5d450601fd441e351dde75610edd54c37bcc0b91f5afee1c0ac387aee2c9bbf"} Nov 22 09:09:36 crc kubenswrapper[4858]: I1122 09:09:36.944862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"1a65ce59-d1e7-4b40-af53-14081345448e","Type":"ContainerStarted","Data":"f20f5bb0ea39c87a856afc7a600f5ea92e41e85a7473f748ed769de2bbc2cada"} Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.336374 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.356563 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_1a65ce59-d1e7-4b40-af53-14081345448e/mariadb-client-5-default/0.log" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.358532 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqh5l\" (UniqueName: \"kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l\") pod \"1a65ce59-d1e7-4b40-af53-14081345448e\" (UID: \"1a65ce59-d1e7-4b40-af53-14081345448e\") " Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.369738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l" (OuterVolumeSpecName: "kube-api-access-pqh5l") pod "1a65ce59-d1e7-4b40-af53-14081345448e" (UID: "1a65ce59-d1e7-4b40-af53-14081345448e"). InnerVolumeSpecName "kube-api-access-pqh5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.380045 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.389590 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.460697 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqh5l\" (UniqueName: \"kubernetes.io/projected/1a65ce59-d1e7-4b40-af53-14081345448e-kube-api-access-pqh5l\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.510105 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 09:09:38 crc kubenswrapper[4858]: E1122 09:09:38.510431 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a65ce59-d1e7-4b40-af53-14081345448e" containerName="mariadb-client-5-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.510445 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a65ce59-d1e7-4b40-af53-14081345448e" containerName="mariadb-client-5-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.510620 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a65ce59-d1e7-4b40-af53-14081345448e" containerName="mariadb-client-5-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.511145 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.526837 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.664199 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2dk\" (UniqueName: \"kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk\") pod \"mariadb-client-6-default\" (UID: \"8a54f485-9cff-4568-b950-2775f4e44b21\") " pod="openstack/mariadb-client-6-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.766365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk2dk\" (UniqueName: \"kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk\") pod \"mariadb-client-6-default\" (UID: \"8a54f485-9cff-4568-b950-2775f4e44b21\") " pod="openstack/mariadb-client-6-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.788821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2dk\" (UniqueName: \"kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk\") pod \"mariadb-client-6-default\" (UID: \"8a54f485-9cff-4568-b950-2775f4e44b21\") " pod="openstack/mariadb-client-6-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.830638 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.966044 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f20f5bb0ea39c87a856afc7a600f5ea92e41e85a7473f748ed769de2bbc2cada" Nov 22 09:09:38 crc kubenswrapper[4858]: I1122 09:09:38.966117 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 09:09:39 crc kubenswrapper[4858]: I1122 09:09:39.343433 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 09:09:39 crc kubenswrapper[4858]: I1122 09:09:39.551394 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a65ce59-d1e7-4b40-af53-14081345448e" path="/var/lib/kubelet/pods/1a65ce59-d1e7-4b40-af53-14081345448e/volumes" Nov 22 09:09:41 crc kubenswrapper[4858]: I1122 09:09:39.977395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"8a54f485-9cff-4568-b950-2775f4e44b21","Type":"ContainerStarted","Data":"7f6d71f0d8f6cb4ac9f8eb57ab5caf6f08d31aa9a2491b79d14bcbed9c16c4da"} Nov 22 09:09:41 crc kubenswrapper[4858]: I1122 09:09:39.977788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"8a54f485-9cff-4568-b950-2775f4e44b21","Type":"ContainerStarted","Data":"2b8e1284e2e0bbafd2b0479d8c8630ff45fd202710fec156222064a223922f85"} Nov 22 09:09:41 crc kubenswrapper[4858]: I1122 09:09:40.986015 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a54f485-9cff-4568-b950-2775f4e44b21" containerID="7f6d71f0d8f6cb4ac9f8eb57ab5caf6f08d31aa9a2491b79d14bcbed9c16c4da" exitCode=1 Nov 22 09:09:41 crc kubenswrapper[4858]: I1122 09:09:40.986080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"8a54f485-9cff-4568-b950-2775f4e44b21","Type":"ContainerDied","Data":"7f6d71f0d8f6cb4ac9f8eb57ab5caf6f08d31aa9a2491b79d14bcbed9c16c4da"} Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.381255 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.408466 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_8a54f485-9cff-4568-b950-2775f4e44b21/mariadb-client-6-default/0.log" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.444525 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.456194 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.460795 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk2dk\" (UniqueName: \"kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk\") pod \"8a54f485-9cff-4568-b950-2775f4e44b21\" (UID: \"8a54f485-9cff-4568-b950-2775f4e44b21\") " Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.467925 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk" (OuterVolumeSpecName: "kube-api-access-dk2dk") pod "8a54f485-9cff-4568-b950-2775f4e44b21" (UID: "8a54f485-9cff-4568-b950-2775f4e44b21"). InnerVolumeSpecName "kube-api-access-dk2dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.564392 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk2dk\" (UniqueName: \"kubernetes.io/projected/8a54f485-9cff-4568-b950-2775f4e44b21-kube-api-access-dk2dk\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.577661 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 09:09:42 crc kubenswrapper[4858]: E1122 09:09:42.578201 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a54f485-9cff-4568-b950-2775f4e44b21" containerName="mariadb-client-6-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.578225 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a54f485-9cff-4568-b950-2775f4e44b21" containerName="mariadb-client-6-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.578439 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a54f485-9cff-4568-b950-2775f4e44b21" containerName="mariadb-client-6-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.579233 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.585562 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.665438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnbw\" (UniqueName: \"kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw\") pod \"mariadb-client-7-default\" (UID: \"d3d0f7cc-d97b-4d4d-beba-3eb29778146d\") " pod="openstack/mariadb-client-7-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.766337 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnbw\" (UniqueName: \"kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw\") pod \"mariadb-client-7-default\" (UID: \"d3d0f7cc-d97b-4d4d-beba-3eb29778146d\") " pod="openstack/mariadb-client-7-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.787894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnbw\" (UniqueName: \"kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw\") pod \"mariadb-client-7-default\" (UID: \"d3d0f7cc-d97b-4d4d-beba-3eb29778146d\") " pod="openstack/mariadb-client-7-default" Nov 22 09:09:42 crc kubenswrapper[4858]: I1122 09:09:42.905829 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 09:09:43 crc kubenswrapper[4858]: I1122 09:09:43.015416 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b8e1284e2e0bbafd2b0479d8c8630ff45fd202710fec156222064a223922f85" Nov 22 09:09:43 crc kubenswrapper[4858]: I1122 09:09:43.015549 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 09:09:43 crc kubenswrapper[4858]: I1122 09:09:43.424096 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 09:09:43 crc kubenswrapper[4858]: W1122 09:09:43.430286 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d0f7cc_d97b_4d4d_beba_3eb29778146d.slice/crio-bcbf9a279553b303d0d8a57736f759ff5cb487fd8b29477384452ea94936ad5b WatchSource:0}: Error finding container bcbf9a279553b303d0d8a57736f759ff5cb487fd8b29477384452ea94936ad5b: Status 404 returned error can't find the container with id bcbf9a279553b303d0d8a57736f759ff5cb487fd8b29477384452ea94936ad5b Nov 22 09:09:43 crc kubenswrapper[4858]: I1122 09:09:43.547561 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a54f485-9cff-4568-b950-2775f4e44b21" path="/var/lib/kubelet/pods/8a54f485-9cff-4568-b950-2775f4e44b21/volumes" Nov 22 09:09:44 crc kubenswrapper[4858]: I1122 09:09:44.022880 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3d0f7cc-d97b-4d4d-beba-3eb29778146d" containerID="f8eaae015c69aa8d47e3c967461bc550d64a7444a3d1e78b1b5636e9e7bd4a3e" exitCode=0 Nov 22 09:09:44 crc kubenswrapper[4858]: I1122 09:09:44.022929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"d3d0f7cc-d97b-4d4d-beba-3eb29778146d","Type":"ContainerDied","Data":"f8eaae015c69aa8d47e3c967461bc550d64a7444a3d1e78b1b5636e9e7bd4a3e"} Nov 22 09:09:44 crc kubenswrapper[4858]: I1122 09:09:44.022959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"d3d0f7cc-d97b-4d4d-beba-3eb29778146d","Type":"ContainerStarted","Data":"bcbf9a279553b303d0d8a57736f759ff5cb487fd8b29477384452ea94936ad5b"} Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.427285 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.449208 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_d3d0f7cc-d97b-4d4d-beba-3eb29778146d/mariadb-client-7-default/0.log" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.477783 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.483708 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.612507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlnbw\" (UniqueName: \"kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw\") pod \"d3d0f7cc-d97b-4d4d-beba-3eb29778146d\" (UID: \"d3d0f7cc-d97b-4d4d-beba-3eb29778146d\") " Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.625667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw" (OuterVolumeSpecName: "kube-api-access-rlnbw") pod "d3d0f7cc-d97b-4d4d-beba-3eb29778146d" (UID: "d3d0f7cc-d97b-4d4d-beba-3eb29778146d"). InnerVolumeSpecName "kube-api-access-rlnbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.633069 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 22 09:09:45 crc kubenswrapper[4858]: E1122 09:09:45.633714 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d0f7cc-d97b-4d4d-beba-3eb29778146d" containerName="mariadb-client-7-default" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.633742 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d0f7cc-d97b-4d4d-beba-3eb29778146d" containerName="mariadb-client-7-default" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.633943 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d0f7cc-d97b-4d4d-beba-3eb29778146d" containerName="mariadb-client-7-default" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.634772 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.641131 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.715489 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlnbw\" (UniqueName: \"kubernetes.io/projected/d3d0f7cc-d97b-4d4d-beba-3eb29778146d-kube-api-access-rlnbw\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.816922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7jlz\" (UniqueName: \"kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz\") pod \"mariadb-client-2\" (UID: \"9f51c401-8450-43b3-a2ad-fd17ff56348c\") " pod="openstack/mariadb-client-2" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.919093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7jlz\" (UniqueName: \"kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz\") pod \"mariadb-client-2\" (UID: \"9f51c401-8450-43b3-a2ad-fd17ff56348c\") " pod="openstack/mariadb-client-2" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.943879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7jlz\" (UniqueName: \"kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz\") pod \"mariadb-client-2\" (UID: \"9f51c401-8450-43b3-a2ad-fd17ff56348c\") " pod="openstack/mariadb-client-2" Nov 22 09:09:45 crc kubenswrapper[4858]: I1122 09:09:45.970065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 09:09:46 crc kubenswrapper[4858]: I1122 09:09:46.047513 4858 scope.go:117] "RemoveContainer" containerID="f8eaae015c69aa8d47e3c967461bc550d64a7444a3d1e78b1b5636e9e7bd4a3e" Nov 22 09:09:46 crc kubenswrapper[4858]: I1122 09:09:46.047683 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 09:09:46 crc kubenswrapper[4858]: I1122 09:09:46.506174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 09:09:46 crc kubenswrapper[4858]: W1122 09:09:46.513427 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f51c401_8450_43b3_a2ad_fd17ff56348c.slice/crio-a8721d49bf54f72619e5a1bc2070c68e3e9eaa1652192e06ae71f603ed39a3c5 WatchSource:0}: Error finding container a8721d49bf54f72619e5a1bc2070c68e3e9eaa1652192e06ae71f603ed39a3c5: Status 404 returned error can't find the container with id a8721d49bf54f72619e5a1bc2070c68e3e9eaa1652192e06ae71f603ed39a3c5 Nov 22 09:09:47 crc kubenswrapper[4858]: I1122 09:09:47.059340 4858 generic.go:334] "Generic (PLEG): container finished" podID="9f51c401-8450-43b3-a2ad-fd17ff56348c" containerID="a0e5c83d1bdf0033b29b707437ad6ffc136e9d43c2ff58fb7eaa89f80c8380a4" exitCode=0 Nov 22 09:09:47 crc kubenswrapper[4858]: I1122 09:09:47.059393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9f51c401-8450-43b3-a2ad-fd17ff56348c","Type":"ContainerDied","Data":"a0e5c83d1bdf0033b29b707437ad6ffc136e9d43c2ff58fb7eaa89f80c8380a4"} Nov 22 09:09:47 crc kubenswrapper[4858]: I1122 09:09:47.059624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9f51c401-8450-43b3-a2ad-fd17ff56348c","Type":"ContainerStarted","Data":"a8721d49bf54f72619e5a1bc2070c68e3e9eaa1652192e06ae71f603ed39a3c5"} Nov 22 09:09:47 crc kubenswrapper[4858]: I1122 09:09:47.548875 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d0f7cc-d97b-4d4d-beba-3eb29778146d" path="/var/lib/kubelet/pods/d3d0f7cc-d97b-4d4d-beba-3eb29778146d/volumes" Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.434576 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.488857 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7jlz\" (UniqueName: \"kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz\") pod \"9f51c401-8450-43b3-a2ad-fd17ff56348c\" (UID: \"9f51c401-8450-43b3-a2ad-fd17ff56348c\") " Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.488913 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_9f51c401-8450-43b3-a2ad-fd17ff56348c/mariadb-client-2/0.log" Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.502954 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz" (OuterVolumeSpecName: "kube-api-access-x7jlz") pod "9f51c401-8450-43b3-a2ad-fd17ff56348c" (UID: "9f51c401-8450-43b3-a2ad-fd17ff56348c"). InnerVolumeSpecName "kube-api-access-x7jlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.523235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.529525 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 09:09:48 crc kubenswrapper[4858]: I1122 09:09:48.593238 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7jlz\" (UniqueName: \"kubernetes.io/projected/9f51c401-8450-43b3-a2ad-fd17ff56348c-kube-api-access-x7jlz\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:49 crc kubenswrapper[4858]: I1122 09:09:49.075935 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8721d49bf54f72619e5a1bc2070c68e3e9eaa1652192e06ae71f603ed39a3c5" Nov 22 09:09:49 crc kubenswrapper[4858]: I1122 09:09:49.075968 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 09:09:49 crc kubenswrapper[4858]: I1122 09:09:49.553745 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f51c401-8450-43b3-a2ad-fd17ff56348c" path="/var/lib/kubelet/pods/9f51c401-8450-43b3-a2ad-fd17ff56348c/volumes" Nov 22 09:10:33 crc kubenswrapper[4858]: I1122 09:10:33.805052 4858 scope.go:117] "RemoveContainer" containerID="032cc615925a90436ca327c0e66e7dc900bd185b7377ef4fdc8c17b514a0eb43" Nov 22 09:10:45 crc kubenswrapper[4858]: I1122 09:10:45.312255 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:10:45 crc kubenswrapper[4858]: I1122 09:10:45.313004 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:11:15 crc kubenswrapper[4858]: I1122 09:11:15.312404 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:11:15 crc kubenswrapper[4858]: I1122 09:11:15.313051 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:11:33 crc kubenswrapper[4858]: I1122 09:11:33.870027 4858 scope.go:117] "RemoveContainer" containerID="64cc09b78f44d8821336dc8dd392d115f9d7609cfa9e9c281043c030f4f8335a" Nov 22 09:11:33 crc kubenswrapper[4858]: I1122 09:11:33.891017 4858 scope.go:117] "RemoveContainer" containerID="e01d77ab7454fc86512385d1537f266faba7aff8f5819bdb192128bddd083ada" Nov 22 09:11:33 crc kubenswrapper[4858]: I1122 09:11:33.912633 4858 scope.go:117] "RemoveContainer" containerID="dee622070b5a4a0d32f089dbdddb0ef88ee2cb854aa320ef0315952bd6e9ff4e" Nov 22 09:11:45 crc kubenswrapper[4858]: I1122 
09:11:45.311638 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:11:45 crc kubenswrapper[4858]: I1122 09:11:45.312268 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:11:45 crc kubenswrapper[4858]: I1122 09:11:45.312313 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:11:45 crc kubenswrapper[4858]: I1122 09:11:45.313079 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:11:45 crc kubenswrapper[4858]: I1122 09:11:45.313134 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" gracePeriod=600 Nov 22 09:11:45 crc kubenswrapper[4858]: E1122 09:11:45.439913 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:11:45 crc kubenswrapper[4858]: E1122 09:11:45.471695 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac3f217_ad73_4e89_b703_b42a3c6c9ed4.slice/crio-conmon-79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac3f217_ad73_4e89_b703_b42a3c6c9ed4.slice/crio-79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:11:46 crc kubenswrapper[4858]: I1122 09:11:46.052264 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" exitCode=0 Nov 22 09:11:46 crc kubenswrapper[4858]: I1122 09:11:46.052439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f"} Nov 22 09:11:46 crc kubenswrapper[4858]: I1122 09:11:46.052530 4858 
scope.go:117] "RemoveContainer" containerID="d3a5f20f89a78131083b1d0f65c8f1a491a734a067a67b704012c5563057b454" Nov 22 09:11:46 crc kubenswrapper[4858]: I1122 09:11:46.053391 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:11:46 crc kubenswrapper[4858]: E1122 09:11:46.053707 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:12:00 crc kubenswrapper[4858]: I1122 09:12:00.535760 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:12:00 crc kubenswrapper[4858]: E1122 09:12:00.536576 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:12:13 crc kubenswrapper[4858]: I1122 09:12:13.536605 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:12:13 crc kubenswrapper[4858]: E1122 09:12:13.537679 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:12:27 crc kubenswrapper[4858]: I1122 09:12:27.536226 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:12:27 crc kubenswrapper[4858]: E1122 09:12:27.537097 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:12:38 crc kubenswrapper[4858]: I1122 09:12:38.535902 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:12:38 crc kubenswrapper[4858]: E1122 09:12:38.536865 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:12:52 crc kubenswrapper[4858]: I1122 09:12:52.535505 4858 scope.go:117] 
"RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:12:52 crc kubenswrapper[4858]: E1122 09:12:52.536227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:13:04 crc kubenswrapper[4858]: I1122 09:13:04.535752 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:13:04 crc kubenswrapper[4858]: E1122 09:13:04.536857 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:13:19 crc kubenswrapper[4858]: I1122 09:13:19.539513 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:13:19 crc kubenswrapper[4858]: E1122 09:13:19.541736 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:13:32 crc kubenswrapper[4858]: I1122 09:13:32.535839 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:13:32 crc kubenswrapper[4858]: E1122 09:13:32.536824 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:13:45 crc kubenswrapper[4858]: I1122 09:13:45.535609 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:13:45 crc kubenswrapper[4858]: E1122 09:13:45.536393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:13:59 crc kubenswrapper[4858]: I1122 09:13:59.542201 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:13:59 crc kubenswrapper[4858]: E1122 09:13:59.543475 4858 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.661068 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:07 crc kubenswrapper[4858]: E1122 09:14:07.661993 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f51c401-8450-43b3-a2ad-fd17ff56348c" containerName="mariadb-client-2" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.662012 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f51c401-8450-43b3-a2ad-fd17ff56348c" containerName="mariadb-client-2" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.662179 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f51c401-8450-43b3-a2ad-fd17ff56348c" containerName="mariadb-client-2" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.663563 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.683271 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.762816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.762887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.762945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wrq\" (UniqueName: \"kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.864044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.864143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wrq\" (UniqueName: \"kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " 
pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.864188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.864713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.864819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.882753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wrq\" (UniqueName: \"kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq\") pod \"certified-operators-7z547\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:07 crc kubenswrapper[4858]: I1122 09:14:07.985005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:08 crc kubenswrapper[4858]: I1122 09:14:08.503003 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:09 crc kubenswrapper[4858]: I1122 09:14:09.091060 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerID="47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08" exitCode=0 Nov 22 09:14:09 crc kubenswrapper[4858]: I1122 09:14:09.091161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerDied","Data":"47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08"} Nov 22 09:14:09 crc kubenswrapper[4858]: I1122 09:14:09.091377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerStarted","Data":"26d5b1db45ade79b8082fdd9c36e5e7d6751c218c110937f53cf10097f280616"} Nov 22 09:14:09 crc kubenswrapper[4858]: I1122 09:14:09.094573 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:14:11 crc kubenswrapper[4858]: I1122 09:14:11.109963 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerID="eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846" exitCode=0 Nov 22 09:14:11 crc kubenswrapper[4858]: I1122 09:14:11.110192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" 
event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerDied","Data":"eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846"} Nov 22 09:14:12 crc kubenswrapper[4858]: I1122 09:14:12.121723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerStarted","Data":"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929"} Nov 22 09:14:12 crc kubenswrapper[4858]: I1122 09:14:12.140894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7z547" podStartSLOduration=2.550673318 podStartE2EDuration="5.140873041s" podCreationTimestamp="2025-11-22 09:14:07 +0000 UTC" firstStartedPulling="2025-11-22 09:14:09.094173917 +0000 UTC m=+7410.935596933" lastFinishedPulling="2025-11-22 09:14:11.68437365 +0000 UTC m=+7413.525796656" observedRunningTime="2025-11-22 09:14:12.136346686 +0000 UTC m=+7413.977769712" watchObservedRunningTime="2025-11-22 09:14:12.140873041 +0000 UTC m=+7413.982296057" Nov 22 09:14:13 crc kubenswrapper[4858]: I1122 09:14:13.536564 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:14:13 crc kubenswrapper[4858]: E1122 09:14:13.536972 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:14:17 crc kubenswrapper[4858]: I1122 09:14:17.985844 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:17 crc kubenswrapper[4858]: I1122 09:14:17.986343 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:18 crc kubenswrapper[4858]: I1122 09:14:18.028629 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:18 crc kubenswrapper[4858]: I1122 09:14:18.322517 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:18 crc kubenswrapper[4858]: I1122 09:14:18.371038 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.297626 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7z547" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="registry-server" containerID="cri-o://8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929" gracePeriod=2 Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.716124 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.772936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content\") pod \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.773084 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wrq\" (UniqueName: \"kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq\") pod \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.773181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities\") pod \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\" (UID: \"6ce388b7-2e37-4eda-be30-0632ca5d16f6\") " Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.774080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities" (OuterVolumeSpecName: "utilities") pod "6ce388b7-2e37-4eda-be30-0632ca5d16f6" (UID: "6ce388b7-2e37-4eda-be30-0632ca5d16f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.778897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq" (OuterVolumeSpecName: "kube-api-access-r7wrq") pod "6ce388b7-2e37-4eda-be30-0632ca5d16f6" (UID: "6ce388b7-2e37-4eda-be30-0632ca5d16f6"). InnerVolumeSpecName "kube-api-access-r7wrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.834947 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ce388b7-2e37-4eda-be30-0632ca5d16f6" (UID: "6ce388b7-2e37-4eda-be30-0632ca5d16f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.874606 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wrq\" (UniqueName: \"kubernetes.io/projected/6ce388b7-2e37-4eda-be30-0632ca5d16f6-kube-api-access-r7wrq\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.874650 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:20 crc kubenswrapper[4858]: I1122 09:14:20.874661 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce388b7-2e37-4eda-be30-0632ca5d16f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.306197 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerID="8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929" exitCode=0 Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.306254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerDied","Data":"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929"} Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.306264 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7z547" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.306287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7z547" event={"ID":"6ce388b7-2e37-4eda-be30-0632ca5d16f6","Type":"ContainerDied","Data":"26d5b1db45ade79b8082fdd9c36e5e7d6751c218c110937f53cf10097f280616"} Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.306306 4858 scope.go:117] "RemoveContainer" containerID="8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.328775 4858 scope.go:117] "RemoveContainer" containerID="eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.340344 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.346764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7z547"] Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.372562 4858 scope.go:117] "RemoveContainer" containerID="47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.393554 4858 scope.go:117] "RemoveContainer" containerID="8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929" Nov 22 09:14:21 crc kubenswrapper[4858]: E1122 09:14:21.394145 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929\": container with ID starting with 8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929 not found: ID does not exist" containerID="8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.394202 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929"} err="failed to get container status \"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929\": rpc error: code = NotFound desc = could not find container \"8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929\": container with ID starting with 8d433d988545f17610e677e0a9bc37b25f15816e479cea9fbfb59dcd0129b929 not found: ID does not exist" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.394234 4858 scope.go:117] "RemoveContainer" containerID="eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846" Nov 22 09:14:21 crc kubenswrapper[4858]: E1122 09:14:21.394711 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846\": container with ID starting with eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846 not found: ID does not exist" containerID="eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.394769 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846"} err="failed to get container status \"eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846\": rpc error: code = NotFound desc = could not find container \"eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846\": container with ID starting with eac0a65c31a492d656edce31545eb4e672af250b2c778dec1ec4a717cdfa8846 not found: ID does not exist" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.394805 4858 scope.go:117] "RemoveContainer" containerID="47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08" Nov 22 09:14:21 crc kubenswrapper[4858]: E1122 09:14:21.395343 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08\": container with ID starting with 47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08 not found: ID does not exist" containerID="47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.395369 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08"} err="failed to get container status \"47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08\": rpc error: code = NotFound desc = could not find container \"47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08\": container with ID starting with 47bf9469916978eb83bdb1a2e6c27f8775466ef3181d862b528e2b5338277b08 not found: ID does not exist" Nov 22 09:14:21 crc kubenswrapper[4858]: I1122 09:14:21.547115 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" path="/var/lib/kubelet/pods/6ce388b7-2e37-4eda-be30-0632ca5d16f6/volumes" Nov 22 09:14:24 crc kubenswrapper[4858]: I1122 09:14:24.535901 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:14:24 crc kubenswrapper[4858]: E1122 09:14:24.536722 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:14:39 crc kubenswrapper[4858]: I1122 09:14:39.541543 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:14:39 crc kubenswrapper[4858]: E1122 09:14:39.542478 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:14:52 crc kubenswrapper[4858]: I1122 09:14:52.536011 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:14:52 crc kubenswrapper[4858]: E1122 09:14:52.536884 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.148481 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv"] Nov 22 09:15:00 crc kubenswrapper[4858]: E1122 09:15:00.149203 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.149219 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4858]: E1122 09:15:00.149231 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="extract-content" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.149239 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="extract-content" Nov 22 09:15:00 crc kubenswrapper[4858]: E1122 09:15:00.149266 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="extract-utilities" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.149274 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="extract-utilities" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.149485 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce388b7-2e37-4eda-be30-0632ca5d16f6" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.150112 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.154537 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.154554 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.162228 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv"] Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.300975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.301536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.301755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9jv\" (UniqueName: \"kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.403055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9jv\" (UniqueName: \"kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.403112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.403215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.404301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume\") pod 
\"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.417714 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.428232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9jv\" (UniqueName: \"kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv\") pod \"collect-profiles-29396715-4wkzv\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.483578 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:00 crc kubenswrapper[4858]: I1122 09:15:00.912033 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv"] Nov 22 09:15:01 crc kubenswrapper[4858]: I1122 09:15:01.622918 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ed03a6d-541b-4dde-91cf-887ee57ef0d2" containerID="cb6c48bbfe994bc371871511ba63e2ae9fe900d3e04b1c81f5fe757548fab88b" exitCode=0 Nov 22 09:15:01 crc kubenswrapper[4858]: I1122 09:15:01.622970 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" event={"ID":"9ed03a6d-541b-4dde-91cf-887ee57ef0d2","Type":"ContainerDied","Data":"cb6c48bbfe994bc371871511ba63e2ae9fe900d3e04b1c81f5fe757548fab88b"} Nov 22 09:15:01 crc kubenswrapper[4858]: I1122 09:15:01.623000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" event={"ID":"9ed03a6d-541b-4dde-91cf-887ee57ef0d2","Type":"ContainerStarted","Data":"38641dd55d4ce6768ebcf324bb262b00e0d61b928be2dbecf5e6acf58cbb373b"} Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.035759 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.166478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume\") pod \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.166638 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume\") pod \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.166726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9jv\" (UniqueName: \"kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv\") pod \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\" (UID: \"9ed03a6d-541b-4dde-91cf-887ee57ef0d2\") " Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.167219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ed03a6d-541b-4dde-91cf-887ee57ef0d2" (UID: "9ed03a6d-541b-4dde-91cf-887ee57ef0d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.173115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv" (OuterVolumeSpecName: "kube-api-access-5x9jv") pod "9ed03a6d-541b-4dde-91cf-887ee57ef0d2" (UID: "9ed03a6d-541b-4dde-91cf-887ee57ef0d2"). InnerVolumeSpecName "kube-api-access-5x9jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.174084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ed03a6d-541b-4dde-91cf-887ee57ef0d2" (UID: "9ed03a6d-541b-4dde-91cf-887ee57ef0d2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.269304 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.269470 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9jv\" (UniqueName: \"kubernetes.io/projected/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-kube-api-access-5x9jv\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.269495 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ed03a6d-541b-4dde-91cf-887ee57ef0d2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.641457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" event={"ID":"9ed03a6d-541b-4dde-91cf-887ee57ef0d2","Type":"ContainerDied","Data":"38641dd55d4ce6768ebcf324bb262b00e0d61b928be2dbecf5e6acf58cbb373b"} Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.641513 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38641dd55d4ce6768ebcf324bb262b00e0d61b928be2dbecf5e6acf58cbb373b" Nov 22 09:15:03 crc kubenswrapper[4858]: I1122 09:15:03.641532 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-4wkzv" Nov 22 09:15:04 crc kubenswrapper[4858]: I1122 09:15:04.140907 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7"] Nov 22 09:15:04 crc kubenswrapper[4858]: I1122 09:15:04.147209 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s4pq7"] Nov 22 09:15:05 crc kubenswrapper[4858]: I1122 09:15:05.535609 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:15:05 crc kubenswrapper[4858]: E1122 09:15:05.535960 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:15:05 crc kubenswrapper[4858]: I1122 09:15:05.546165 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0d7c96-7624-4ac8-b72e-7536a0fb25b7" path="/var/lib/kubelet/pods/4c0d7c96-7624-4ac8-b72e-7536a0fb25b7/volumes" Nov 22 09:15:16 crc kubenswrapper[4858]: I1122 09:15:16.536813 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:15:16 crc kubenswrapper[4858]: E1122 09:15:16.537931 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:15:27 crc kubenswrapper[4858]: I1122 09:15:27.535076 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:15:27 crc kubenswrapper[4858]: E1122 09:15:27.535810 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:15:34 crc kubenswrapper[4858]: I1122 09:15:34.014013 4858 scope.go:117] "RemoveContainer" containerID="ad38c91fbcc2fb98dad6e27e00989dd7e079e72f0d4f05a28603bcb2da534b2b" Nov 22 09:15:34 crc kubenswrapper[4858]: I1122 09:15:34.032592 4858 scope.go:117] "RemoveContainer" containerID="2f565e19489e9b5580cd2ebb62a9f7a919ca4638f31c7d1bcc528fd9cb703f09" Nov 22 09:15:39 crc kubenswrapper[4858]: I1122 09:15:39.548282 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:15:39 crc kubenswrapper[4858]: E1122 09:15:39.549995 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:15:51 crc kubenswrapper[4858]: I1122 09:15:51.536485 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:15:51 crc kubenswrapper[4858]: E1122 09:15:51.537239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:16:02 crc kubenswrapper[4858]: I1122 09:16:02.535587 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:16:02 crc kubenswrapper[4858]: E1122 09:16:02.536400 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.412424 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:16 crc kubenswrapper[4858]: E1122 09:16:16.414961 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed03a6d-541b-4dde-91cf-887ee57ef0d2" containerName="collect-profiles" Nov 22 09:16:16 
crc kubenswrapper[4858]: I1122 09:16:16.418891 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed03a6d-541b-4dde-91cf-887ee57ef0d2" containerName="collect-profiles" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.419501 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed03a6d-541b-4dde-91cf-887ee57ef0d2" containerName="collect-profiles" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.421219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.425917 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.535679 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:16:16 crc kubenswrapper[4858]: E1122 09:16:16.536090 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.603486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49cg\" (UniqueName: \"kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.603543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.604136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.705768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49cg\" (UniqueName: \"kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.705837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.705897 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.706405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.706664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.732020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49cg\" (UniqueName: \"kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg\") pod \"community-operators-lg4kn\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:16 crc kubenswrapper[4858]: I1122 09:16:16.747423 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:17 crc kubenswrapper[4858]: I1122 09:16:17.246036 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:17 crc kubenswrapper[4858]: I1122 09:16:17.296183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerStarted","Data":"0508f73f6616e2205c861a7740d140a1c25c5f24605dfc94e6732a09c277242d"} Nov 22 09:16:18 crc kubenswrapper[4858]: I1122 09:16:18.305071 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerID="3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3" exitCode=0 Nov 22 09:16:18 crc kubenswrapper[4858]: I1122 09:16:18.305125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerDied","Data":"3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3"} Nov 22 09:16:19 crc kubenswrapper[4858]: I1122 09:16:19.314162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerStarted","Data":"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e"} Nov 22 09:16:20 crc kubenswrapper[4858]: I1122 09:16:20.323678 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerID="ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e" exitCode=0 Nov 22 09:16:20 crc kubenswrapper[4858]: I1122 09:16:20.323720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" 
event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerDied","Data":"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e"} Nov 22 09:16:21 crc kubenswrapper[4858]: I1122 09:16:21.333289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerStarted","Data":"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908"} Nov 22 09:16:21 crc kubenswrapper[4858]: I1122 09:16:21.362586 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lg4kn" podStartSLOduration=2.938875621 podStartE2EDuration="5.362564904s" podCreationTimestamp="2025-11-22 09:16:16 +0000 UTC" firstStartedPulling="2025-11-22 09:16:18.307298097 +0000 UTC m=+7540.148721103" lastFinishedPulling="2025-11-22 09:16:20.73098738 +0000 UTC m=+7542.572410386" observedRunningTime="2025-11-22 09:16:21.355529229 +0000 UTC m=+7543.196952255" watchObservedRunningTime="2025-11-22 09:16:21.362564904 +0000 UTC m=+7543.203987910" Nov 22 09:16:26 crc kubenswrapper[4858]: I1122 09:16:26.748004 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:26 crc kubenswrapper[4858]: I1122 09:16:26.748520 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:26 crc kubenswrapper[4858]: I1122 09:16:26.790921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:27 crc kubenswrapper[4858]: I1122 09:16:27.422697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:27 crc kubenswrapper[4858]: I1122 09:16:27.465314 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:27 crc kubenswrapper[4858]: I1122 09:16:27.536265 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:16:27 crc kubenswrapper[4858]: E1122 09:16:27.536521 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.413585 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lg4kn" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="registry-server" containerID="cri-o://f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908" gracePeriod=2 Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.816181 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.920647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities\") pod \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.920782 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content\") pod \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.920833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m49cg\" (UniqueName: \"kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg\") pod \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\" (UID: \"1ea7ae36-61bd-43b8-81d9-16a7fa34b705\") " Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.922029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities" (OuterVolumeSpecName: "utilities") pod "1ea7ae36-61bd-43b8-81d9-16a7fa34b705" (UID: "1ea7ae36-61bd-43b8-81d9-16a7fa34b705"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.926223 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg" (OuterVolumeSpecName: "kube-api-access-m49cg") pod "1ea7ae36-61bd-43b8-81d9-16a7fa34b705" (UID: "1ea7ae36-61bd-43b8-81d9-16a7fa34b705"). InnerVolumeSpecName "kube-api-access-m49cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:16:29 crc kubenswrapper[4858]: I1122 09:16:29.990760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ea7ae36-61bd-43b8-81d9-16a7fa34b705" (UID: "1ea7ae36-61bd-43b8-81d9-16a7fa34b705"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.022651 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.022688 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.022702 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m49cg\" (UniqueName: \"kubernetes.io/projected/1ea7ae36-61bd-43b8-81d9-16a7fa34b705-kube-api-access-m49cg\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.421397 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerID="f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908" exitCode=0 Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.421457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerDied","Data":"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908"} Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.422414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lg4kn" event={"ID":"1ea7ae36-61bd-43b8-81d9-16a7fa34b705","Type":"ContainerDied","Data":"0508f73f6616e2205c861a7740d140a1c25c5f24605dfc94e6732a09c277242d"} Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.422434 4858 scope.go:117] "RemoveContainer" containerID="f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.421467 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lg4kn" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.443867 4858 scope.go:117] "RemoveContainer" containerID="ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.458935 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.465406 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lg4kn"] Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.472679 4858 scope.go:117] "RemoveContainer" containerID="3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.501763 4858 scope.go:117] "RemoveContainer" containerID="f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908" Nov 22 09:16:30 crc kubenswrapper[4858]: E1122 09:16:30.502214 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908\": container with ID starting with f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908 not found: ID does not exist" containerID="f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.502259 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908"} err="failed to get container status \"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908\": rpc error: code = NotFound desc = could not find container \"f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908\": container with ID starting with f76783b62e3753b4c039da8e9c4041daa3a9939b22a53971c334941561a80908 not found: ID does not exist" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.502284 4858 scope.go:117] "RemoveContainer" containerID="ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e" Nov 22 09:16:30 crc kubenswrapper[4858]: E1122 09:16:30.502590 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e\": container with ID starting with ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e not found: ID does not exist" containerID="ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.502616 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e"} err="failed to get container status \"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e\": rpc error: code = NotFound desc = could not find container \"ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e\": container with ID starting with ef0bb08f734c50c2e2c29c6e79ff27bce13a5900bb80ec7e8db7622b1cf3c60e not found: ID does not exist" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.502637 4858 scope.go:117] "RemoveContainer" containerID="3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3" Nov 22 09:16:30 crc kubenswrapper[4858]: E1122 09:16:30.503174 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3\": container with ID starting with 3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3 not found: ID does not exist" containerID="3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3" Nov 22 09:16:30 crc kubenswrapper[4858]: I1122 09:16:30.503219 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3"} err="failed to get container status \"3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3\": rpc error: code = NotFound desc = could not find container \"3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3\": container with ID starting with 3edb296f857892864460dac3d0f2a52c286539de7b85fb6f8683b25f61a539f3 not found: ID does not exist" Nov 22 09:16:31 crc kubenswrapper[4858]: I1122 09:16:31.544637 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" path="/var/lib/kubelet/pods/1ea7ae36-61bd-43b8-81d9-16a7fa34b705/volumes" Nov 22 09:16:34 crc kubenswrapper[4858]: I1122 09:16:34.103307 4858 scope.go:117] "RemoveContainer" containerID="7f6d71f0d8f6cb4ac9f8eb57ab5caf6f08d31aa9a2491b79d14bcbed9c16c4da" Nov 22 09:16:34 crc kubenswrapper[4858]: I1122 09:16:34.122216 4858 scope.go:117] "RemoveContainer" containerID="a0e5c83d1bdf0033b29b707437ad6ffc136e9d43c2ff58fb7eaa89f80c8380a4" Nov 22 09:16:34 crc kubenswrapper[4858]: I1122 09:16:34.158777 4858 scope.go:117] "RemoveContainer" containerID="f5d450601fd441e351dde75610edd54c37bcc0b91f5afee1c0ac387aee2c9bbf" Nov 22 09:16:42 crc kubenswrapper[4858]: I1122 09:16:42.535368 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:16:42 crc kubenswrapper[4858]: E1122 09:16:42.536133 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:16:56 crc kubenswrapper[4858]: I1122 09:16:56.535206 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:16:57 crc kubenswrapper[4858]: I1122 09:16:57.644135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed"} Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.375863 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:22 crc kubenswrapper[4858]: E1122 09:17:22.376843 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="registry-server" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.376857 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="registry-server" Nov 22 09:17:22 crc kubenswrapper[4858]: E1122 09:17:22.376880 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="extract-utilities" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.376886 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="extract-utilities" Nov 22 09:17:22 crc kubenswrapper[4858]: E1122 09:17:22.376908 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="extract-content" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.376914 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="extract-content" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.377061 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea7ae36-61bd-43b8-81d9-16a7fa34b705" containerName="registry-server" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.378194 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.384707 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.527855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.527919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgdp\" (UniqueName: \"kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.528123 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.630132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.630188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgdp\" (UniqueName: \"kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.630250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.631140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.631778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.655779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgdp\" (UniqueName: \"kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp\") pod \"redhat-marketplace-m7t7w\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:22 crc kubenswrapper[4858]: I1122 09:17:22.705775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:23 crc kubenswrapper[4858]: I1122 09:17:23.136337 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:24 crc kubenswrapper[4858]: I1122 09:17:24.070809 4858 generic.go:334] "Generic (PLEG): container finished" podID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerID="7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642" exitCode=0 Nov 22 09:17:24 crc kubenswrapper[4858]: I1122 09:17:24.070921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerDied","Data":"7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642"} Nov 22 09:17:24 crc kubenswrapper[4858]: I1122 09:17:24.071142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerStarted","Data":"1d24b3b8d5917e702a7e6a5134ebc7eb8ea14f38bcc9d7dc3179939ebddefd4f"} Nov 22 09:17:25 crc kubenswrapper[4858]: I1122 09:17:25.087180 4858 generic.go:334] "Generic (PLEG): container finished" podID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerID="2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef" exitCode=0 Nov 22 09:17:25 crc kubenswrapper[4858]: I1122 09:17:25.087362 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerDied","Data":"2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef"} Nov 22 09:17:26 crc kubenswrapper[4858]: I1122 09:17:26.096939 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerStarted","Data":"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa"} Nov 22 09:17:27 crc 
kubenswrapper[4858]: I1122 09:17:27.127358 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m7t7w" podStartSLOduration=3.293285224 podStartE2EDuration="5.127313108s" podCreationTimestamp="2025-11-22 09:17:22 +0000 UTC" firstStartedPulling="2025-11-22 09:17:24.074215011 +0000 UTC m=+7605.915638017" lastFinishedPulling="2025-11-22 09:17:25.908242895 +0000 UTC m=+7607.749665901" observedRunningTime="2025-11-22 09:17:27.120683706 +0000 UTC m=+7608.962106712" watchObservedRunningTime="2025-11-22 09:17:27.127313108 +0000 UTC m=+7608.968736114" Nov 22 09:17:32 crc kubenswrapper[4858]: I1122 09:17:32.706904 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:32 crc kubenswrapper[4858]: I1122 09:17:32.707832 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:32 crc kubenswrapper[4858]: I1122 09:17:32.763038 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:33 crc kubenswrapper[4858]: I1122 09:17:33.194136 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:33 crc kubenswrapper[4858]: I1122 09:17:33.326737 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.161749 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m7t7w" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="registry-server" containerID="cri-o://c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa" gracePeriod=2 Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.596846 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.723270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content\") pod \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.723390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities\") pod \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.723517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgdp\" (UniqueName: \"kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp\") pod \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\" (UID: \"f562dcb6-71c9-48d0-9440-5dca7fdabcc6\") " Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.724697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities" (OuterVolumeSpecName: "utilities") pod "f562dcb6-71c9-48d0-9440-5dca7fdabcc6" (UID: "f562dcb6-71c9-48d0-9440-5dca7fdabcc6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.728931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp" (OuterVolumeSpecName: "kube-api-access-vkgdp") pod "f562dcb6-71c9-48d0-9440-5dca7fdabcc6" (UID: "f562dcb6-71c9-48d0-9440-5dca7fdabcc6"). InnerVolumeSpecName "kube-api-access-vkgdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.745246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f562dcb6-71c9-48d0-9440-5dca7fdabcc6" (UID: "f562dcb6-71c9-48d0-9440-5dca7fdabcc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.824599 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.824628 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgdp\" (UniqueName: \"kubernetes.io/projected/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-kube-api-access-vkgdp\") on node \"crc\" DevicePath \"\"" Nov 22 09:17:35 crc kubenswrapper[4858]: I1122 09:17:35.824638 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562dcb6-71c9-48d0-9440-5dca7fdabcc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.173230 4858 generic.go:334] "Generic (PLEG): container finished" podID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerID="c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa" exitCode=0 Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.173301 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7t7w" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.173367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerDied","Data":"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa"} Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.174526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7t7w" event={"ID":"f562dcb6-71c9-48d0-9440-5dca7fdabcc6","Type":"ContainerDied","Data":"1d24b3b8d5917e702a7e6a5134ebc7eb8ea14f38bcc9d7dc3179939ebddefd4f"} Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.174547 4858 scope.go:117] "RemoveContainer" containerID="c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.195277 4858 scope.go:117] "RemoveContainer" containerID="2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.212836 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.216899 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7t7w"] Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.232262 4858 scope.go:117] "RemoveContainer" containerID="7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.250570 4858 scope.go:117] "RemoveContainer" containerID="c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa" Nov 22 09:17:36 crc kubenswrapper[4858]: E1122 09:17:36.251256 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa\": container with ID starting with c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa not found: ID does not exist" containerID="c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.251342 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa"} err="failed to get container status \"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa\": rpc error: code = NotFound desc = could not find container \"c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa\": container with ID starting with c128db8a64f1846d711a32fa7595767b19600ae28173d511c4cbe565e80d2aaa not found: ID does not exist" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.251377 4858 scope.go:117] "RemoveContainer" containerID="2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef" Nov 22 09:17:36 crc kubenswrapper[4858]: E1122 09:17:36.251912 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef\": container with ID starting with 2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef not found: ID does not exist" containerID="2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.251945 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef"} err="failed to get container status \"2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef\": rpc error: code = NotFound desc = could not find container \"2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef\": container with ID starting with 2d66dba6f1bcc802147d044c36b87b5d3a688017c62803cbe1e16d4d8cef34ef not found: ID does not exist" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.251965 4858 scope.go:117] "RemoveContainer" containerID="7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642" Nov 22 09:17:36 crc kubenswrapper[4858]: E1122 09:17:36.252421 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642\": container with ID starting with 7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642 not found: ID does not exist" containerID="7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642" Nov 22 09:17:36 crc kubenswrapper[4858]: I1122 09:17:36.252454 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642"} err="failed to get container status \"7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642\": rpc error: code = NotFound desc = could not find container \"7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642\": container with ID starting with 7e9fc385aaafb05cd38805c6d3457b4f0a13c4f446b95edcd761e27aaeee2642 not found: ID does not exist" Nov 22 09:17:37 crc kubenswrapper[4858]: I1122 09:17:37.556996 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" path="/var/lib/kubelet/pods/f562dcb6-71c9-48d0-9440-5dca7fdabcc6/volumes" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.526894 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:17:57 crc kubenswrapper[4858]: E1122 09:17:57.527935 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="extract-utilities" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.527968 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="extract-utilities" Nov 22 09:17:57 crc kubenswrapper[4858]: E1122 09:17:57.528002 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="extract-content" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.528010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="extract-content" Nov 22 09:17:57 crc kubenswrapper[4858]: E1122 09:17:57.528028 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="registry-server" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.528038 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" containerName="registry-server" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.528229 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f562dcb6-71c9-48d0-9440-5dca7fdabcc6" 
containerName="registry-server" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.528934 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.531184 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rphst" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.548680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.655040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9zd\" (UniqueName: \"kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.655417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.756636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.756733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9zd\" (UniqueName: \"kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.762243 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.762289 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/644827a087341d0ae7940a723428dbf8af929d125ec18d7f325652e445f3f196/globalmount\"" pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.777204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9zd\" (UniqueName: \"kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.791501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") pod \"mariadb-copy-data\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " pod="openstack/mariadb-copy-data" Nov 22 09:17:57 crc kubenswrapper[4858]: I1122 09:17:57.852524 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:17:58 crc kubenswrapper[4858]: I1122 09:17:58.339022 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:17:59 crc kubenswrapper[4858]: I1122 09:17:59.348066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1e5b4cdf-1c7e-47c4-8921-00df1e643887","Type":"ContainerStarted","Data":"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4"} Nov 22 09:17:59 crc kubenswrapper[4858]: I1122 09:17:59.349445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1e5b4cdf-1c7e-47c4-8921-00df1e643887","Type":"ContainerStarted","Data":"cb5bbc881284fe0d354f5aa29de4977b7a3456be84eb03ccbded6bbce48dd679"} Nov 22 09:17:59 crc kubenswrapper[4858]: I1122 09:17:59.367155 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.367136257 podStartE2EDuration="3.367136257s" podCreationTimestamp="2025-11-22 09:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:17:59.358013955 +0000 UTC m=+7641.199436961" watchObservedRunningTime="2025-11-22 09:17:59.367136257 +0000 UTC m=+7641.208559263" Nov 22 09:18:02 crc kubenswrapper[4858]: I1122 09:18:02.832433 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:02 crc kubenswrapper[4858]: I1122 09:18:02.835137 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:02 crc kubenswrapper[4858]: I1122 09:18:02.839054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:02 crc kubenswrapper[4858]: I1122 09:18:02.929288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcq6\" (UniqueName: \"kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6\") pod \"mariadb-client\" (UID: \"d30da568-bd21-4a0d-910b-013b524c394b\") " pod="openstack/mariadb-client" Nov 22 09:18:03 crc kubenswrapper[4858]: I1122 09:18:03.032626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcq6\" (UniqueName: \"kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6\") pod \"mariadb-client\" (UID: \"d30da568-bd21-4a0d-910b-013b524c394b\") " pod="openstack/mariadb-client" Nov 22 09:18:03 crc kubenswrapper[4858]: I1122 09:18:03.060563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcq6\" (UniqueName: \"kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6\") pod \"mariadb-client\" (UID: \"d30da568-bd21-4a0d-910b-013b524c394b\") " pod="openstack/mariadb-client" Nov 22 09:18:03 crc kubenswrapper[4858]: I1122 09:18:03.169741 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:03 crc kubenswrapper[4858]: I1122 09:18:03.615137 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:04 crc kubenswrapper[4858]: I1122 09:18:04.394870 4858 generic.go:334] "Generic (PLEG): container finished" podID="d30da568-bd21-4a0d-910b-013b524c394b" containerID="f0b40141f955e6ecca80e44f80ba04c41aaf32cba35716161f0d67543e4dace5" exitCode=0 Nov 22 09:18:04 crc kubenswrapper[4858]: I1122 09:18:04.394926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d30da568-bd21-4a0d-910b-013b524c394b","Type":"ContainerDied","Data":"f0b40141f955e6ecca80e44f80ba04c41aaf32cba35716161f0d67543e4dace5"} Nov 22 09:18:04 crc kubenswrapper[4858]: I1122 09:18:04.394960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d30da568-bd21-4a0d-910b-013b524c394b","Type":"ContainerStarted","Data":"bc388a919a55175075dd538a568cbfa9bec4bf95d86bbd632f4d71104106c7bc"} Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.676783 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.697539 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d30da568-bd21-4a0d-910b-013b524c394b/mariadb-client/0.log" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.724691 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.729678 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.771902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fcq6\" (UniqueName: \"kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6\") pod \"d30da568-bd21-4a0d-910b-013b524c394b\" (UID: \"d30da568-bd21-4a0d-910b-013b524c394b\") " Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.777249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6" (OuterVolumeSpecName: "kube-api-access-4fcq6") pod "d30da568-bd21-4a0d-910b-013b524c394b" (UID: "d30da568-bd21-4a0d-910b-013b524c394b"). InnerVolumeSpecName "kube-api-access-4fcq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.873257 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fcq6\" (UniqueName: \"kubernetes.io/projected/d30da568-bd21-4a0d-910b-013b524c394b-kube-api-access-4fcq6\") on node \"crc\" DevicePath \"\"" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.874784 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:05 crc kubenswrapper[4858]: E1122 09:18:05.875138 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30da568-bd21-4a0d-910b-013b524c394b" containerName="mariadb-client" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.875150 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30da568-bd21-4a0d-910b-013b524c394b" containerName="mariadb-client" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.875308 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30da568-bd21-4a0d-910b-013b524c394b" containerName="mariadb-client" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.875870 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.882249 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:05 crc kubenswrapper[4858]: I1122 09:18:05.974691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzpr2\" (UniqueName: \"kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2\") pod \"mariadb-client\" (UID: \"f77438e2-fb5e-4e9a-a734-de677e459ef3\") " pod="openstack/mariadb-client" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.076229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzpr2\" (UniqueName: \"kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2\") pod \"mariadb-client\" (UID: \"f77438e2-fb5e-4e9a-a734-de677e459ef3\") " pod="openstack/mariadb-client" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.093273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzpr2\" (UniqueName: \"kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2\") pod \"mariadb-client\" (UID: \"f77438e2-fb5e-4e9a-a734-de677e459ef3\") " pod="openstack/mariadb-client" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.230834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.424756 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc388a919a55175075dd538a568cbfa9bec4bf95d86bbd632f4d71104106c7bc" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.424824 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.445566 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="d30da568-bd21-4a0d-910b-013b524c394b" podUID="f77438e2-fb5e-4e9a-a734-de677e459ef3" Nov 22 09:18:06 crc kubenswrapper[4858]: I1122 09:18:06.641888 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:06 crc kubenswrapper[4858]: W1122 09:18:06.648024 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77438e2_fb5e_4e9a_a734_de677e459ef3.slice/crio-819625a2b6254d07dc157ca8e85d6a863f11fc2e12c90a599e63b3f5008ffe81 WatchSource:0}: Error finding container 819625a2b6254d07dc157ca8e85d6a863f11fc2e12c90a599e63b3f5008ffe81: Status 404 returned error can't find the container with id 819625a2b6254d07dc157ca8e85d6a863f11fc2e12c90a599e63b3f5008ffe81 Nov 22 09:18:07 crc kubenswrapper[4858]: I1122 09:18:07.434647 4858 generic.go:334] "Generic (PLEG): container finished" podID="f77438e2-fb5e-4e9a-a734-de677e459ef3" containerID="a510d8c3e92e51ab3946f9d37e59fb9e99c0d3fa51267b9c9a3b6b5e9b1bcd34" exitCode=0 Nov 22 09:18:07 crc kubenswrapper[4858]: I1122 09:18:07.434685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f77438e2-fb5e-4e9a-a734-de677e459ef3","Type":"ContainerDied","Data":"a510d8c3e92e51ab3946f9d37e59fb9e99c0d3fa51267b9c9a3b6b5e9b1bcd34"} Nov 22 09:18:07 crc kubenswrapper[4858]: I1122 09:18:07.434710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f77438e2-fb5e-4e9a-a734-de677e459ef3","Type":"ContainerStarted","Data":"819625a2b6254d07dc157ca8e85d6a863f11fc2e12c90a599e63b3f5008ffe81"} Nov 22 09:18:07 crc kubenswrapper[4858]: I1122 09:18:07.543980 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30da568-bd21-4a0d-910b-013b524c394b" path="/var/lib/kubelet/pods/d30da568-bd21-4a0d-910b-013b524c394b/volumes" Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.716162 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.735758 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_f77438e2-fb5e-4e9a-a734-de677e459ef3/mariadb-client/0.log" Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.769528 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.776961 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.817616 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzpr2\" (UniqueName: \"kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2\") pod \"f77438e2-fb5e-4e9a-a734-de677e459ef3\" (UID: \"f77438e2-fb5e-4e9a-a734-de677e459ef3\") " Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.825968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2" (OuterVolumeSpecName: "kube-api-access-kzpr2") pod "f77438e2-fb5e-4e9a-a734-de677e459ef3" (UID: "f77438e2-fb5e-4e9a-a734-de677e459ef3"). 
InnerVolumeSpecName "kube-api-access-kzpr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:18:08 crc kubenswrapper[4858]: I1122 09:18:08.919638 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzpr2\" (UniqueName: \"kubernetes.io/projected/f77438e2-fb5e-4e9a-a734-de677e459ef3-kube-api-access-kzpr2\") on node \"crc\" DevicePath \"\"" Nov 22 09:18:09 crc kubenswrapper[4858]: I1122 09:18:09.453166 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819625a2b6254d07dc157ca8e85d6a863f11fc2e12c90a599e63b3f5008ffe81" Nov 22 09:18:09 crc kubenswrapper[4858]: I1122 09:18:09.453288 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 09:18:09 crc kubenswrapper[4858]: I1122 09:18:09.548483 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77438e2-fb5e-4e9a-a734-de677e459ef3" path="/var/lib/kubelet/pods/f77438e2-fb5e-4e9a-a734-de677e459ef3/volumes" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.311618 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:18:49 crc kubenswrapper[4858]: E1122 09:18:49.312804 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77438e2-fb5e-4e9a-a734-de677e459ef3" containerName="mariadb-client" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.312820 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77438e2-fb5e-4e9a-a734-de677e459ef3" containerName="mariadb-client" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.313209 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77438e2-fb5e-4e9a-a734-de677e459ef3" containerName="mariadb-client" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.314931 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.322968 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.367225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cz55\" (UniqueName: \"kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.367373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.367724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.470130 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.470252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cz55\" (UniqueName: \"kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.470282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.470815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.471107 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.492344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6cz55\" (UniqueName: \"kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55\") pod \"redhat-operators-s5867\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:49 crc kubenswrapper[4858]: I1122 09:18:49.641514 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:50 crc kubenswrapper[4858]: I1122 09:18:50.128770 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:18:50 crc kubenswrapper[4858]: I1122 09:18:50.772169 4858 generic.go:334] "Generic (PLEG): container finished" podID="3da08508-265b-4056-884c-f4ac7714447a" containerID="71dc22e775ff5eb915aba2b3fb9e8772b931fd79cadbad53941123d5690cb563" exitCode=0 Nov 22 09:18:50 crc kubenswrapper[4858]: I1122 09:18:50.772236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerDied","Data":"71dc22e775ff5eb915aba2b3fb9e8772b931fd79cadbad53941123d5690cb563"} Nov 22 09:18:50 crc kubenswrapper[4858]: I1122 09:18:50.772277 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerStarted","Data":"6a99c68eabb82584071358e67401845e862a6b5d7d1c5fbad43ba7e0b4f227cd"} Nov 22 09:18:51 crc kubenswrapper[4858]: I1122 09:18:51.781605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerStarted","Data":"2861976864b9cb2e6d54b32e5fe533d8f20f154c6ea792ff0bd039a61d96269f"} Nov 22 09:18:52 crc kubenswrapper[4858]: I1122 09:18:52.790418 4858 generic.go:334] "Generic (PLEG): container finished" podID="3da08508-265b-4056-884c-f4ac7714447a" containerID="2861976864b9cb2e6d54b32e5fe533d8f20f154c6ea792ff0bd039a61d96269f" exitCode=0 Nov 22 09:18:52 crc kubenswrapper[4858]: I1122 09:18:52.790467 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerDied","Data":"2861976864b9cb2e6d54b32e5fe533d8f20f154c6ea792ff0bd039a61d96269f"} Nov 22 09:18:53 crc kubenswrapper[4858]: I1122 09:18:53.802932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerStarted","Data":"b48d41200525c502bc964dc2d7f83d5545dde4e6b95a0033b4bf02b7f6bc99d5"} Nov 22 09:18:53 crc kubenswrapper[4858]: I1122 09:18:53.830545 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5867" podStartSLOduration=2.41301168 podStartE2EDuration="4.830519136s" podCreationTimestamp="2025-11-22 09:18:49 +0000 UTC" firstStartedPulling="2025-11-22 09:18:50.77462574 +0000 UTC m=+7692.616048746" lastFinishedPulling="2025-11-22 09:18:53.192133196 +0000 UTC m=+7695.033556202" observedRunningTime="2025-11-22 09:18:53.826171558 +0000 UTC m=+7695.667594594" watchObservedRunningTime="2025-11-22 09:18:53.830519136 +0000 UTC m=+7695.671942142" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.243366 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:18:59 crc kubenswrapper[4858]: 
I1122 09:18:59.247903 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.252515 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.252755 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.252894 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.255954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rwvmp" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.256200 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.268960 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.270607 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.279703 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.299602 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.301835 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.322333 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.332358 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67gz\" (UniqueName: \"kubernetes.io/projected/11d61978-7f41-441f-b6b7-18c00e684f58-kube-api-access-l67gz\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385734 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385969 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.385996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-config\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386131 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvdj\" (UniqueName: \"kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1da1b802-a402-4719-966b-e47486a0b6e9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1da1b802-a402-4719-966b-e47486a0b6e9\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386304 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.386415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgnq\" (UniqueName: \"kubernetes.io/projected/ac8a3c95-b813-4505-925f-8e750fd8f963-kube-api-access-ckgnq\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487456 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-combined-ca-bundle\") pod 
\"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-config\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67gz\" (UniqueName: \"kubernetes.io/projected/11d61978-7f41-441f-b6b7-18c00e684f58-kube-api-access-l67gz\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.487728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 
09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488203 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.488837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-config\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpvdj\" (UniqueName: \"kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " 
pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1da1b802-a402-4719-966b-e47486a0b6e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1da1b802-a402-4719-966b-e47486a0b6e9\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.489545 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.490392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.490974 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d61978-7f41-441f-b6b7-18c00e684f58-config\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.491465 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.491487 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1da1b802-a402-4719-966b-e47486a0b6e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1da1b802-a402-4719-966b-e47486a0b6e9\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb4e9f6661df869c28946638b6395f4e6c19b04f0875c3ab560f29c8eeff14dd/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.491519 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.491579 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e508d209919662b7ee97c5960dd377e9dbd3298fe5723dda4b4fef8de7b7184/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.493599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.494212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.498268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.499135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.499765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.505374 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.508591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d61978-7f41-441f-b6b7-18c00e684f58-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.509678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67gz\" (UniqueName: \"kubernetes.io/projected/11d61978-7f41-441f-b6b7-18c00e684f58-kube-api-access-l67gz\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc 
kubenswrapper[4858]: I1122 09:18:59.511997 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpvdj\" (UniqueName: \"kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.532122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") pod \"ovsdbserver-nb-0\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.536662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1da1b802-a402-4719-966b-e47486a0b6e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1da1b802-a402-4719-966b-e47486a0b6e9\") pod \"ovsdbserver-nb-2\" (UID: \"11d61978-7f41-441f-b6b7-18c00e684f58\") " pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.579982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591829 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgnq\" (UniqueName: \"kubernetes.io/projected/ac8a3c95-b813-4505-925f-8e750fd8f963-kube-api-access-ckgnq\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591890 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.591968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-config\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " 
pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.592007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.592029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.594136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.596845 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-config\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.596925 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac8a3c95-b813-4505-925f-8e750fd8f963-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.597388 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.600598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.601270 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.601303 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99c737a9627867d5d74aa2d61befa7328eadd30b7dbb19cd9ccf0a26ffc67f0e/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.605515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.614540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8a3c95-b813-4505-925f-8e750fd8f963-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.617971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgnq\" (UniqueName: \"kubernetes.io/projected/ac8a3c95-b813-4505-925f-8e750fd8f963-kube-api-access-ckgnq\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.644742 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.645966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-452dd406-bdb4-4cb3-8808-34eb6787adfa\") pod \"ovsdbserver-nb-1\" (UID: \"ac8a3c95-b813-4505-925f-8e750fd8f963\") " pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.647936 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.717277 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.891870 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.926621 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.961734 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:18:59 crc kubenswrapper[4858]: I1122 09:18:59.988115 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 09:18:59 crc kubenswrapper[4858]: W1122 09:18:59.996918 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d61978_7f41_441f_b6b7_18c00e684f58.slice/crio-e524d2b14b196b49dfb6b04c83e9bf6e0e357a50894a58946bd1c689a489b716 WatchSource:0}: Error finding container e524d2b14b196b49dfb6b04c83e9bf6e0e357a50894a58946bd1c689a489b716: Status 404 returned error can't find the container with id e524d2b14b196b49dfb6b04c83e9bf6e0e357a50894a58946bd1c689a489b716 Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.237768 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:19:00 crc kubenswrapper[4858]: W1122 09:19:00.241263 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81d944bd_93c5_4863_96df_f83a4ff1db9b.slice/crio-811a3ce45e287556d16b4d6593dbcfea5e69d932639001cc8c108a7643049658 WatchSource:0}: Error finding container 811a3ce45e287556d16b4d6593dbcfea5e69d932639001cc8c108a7643049658: Status 404 returned error can't find the container with id 811a3ce45e287556d16b4d6593dbcfea5e69d932639001cc8c108a7643049658 Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.428441 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 09:19:00 crc kubenswrapper[4858]: W1122 09:19:00.438222 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8a3c95_b813_4505_925f_8e750fd8f963.slice/crio-0161cb79aaa119de25c1bd8c3ec664da11cda91f573f921f3f31a31bed39bdd0 WatchSource:0}: Error finding container 0161cb79aaa119de25c1bd8c3ec664da11cda91f573f921f3f31a31bed39bdd0: Status 404 returned error can't find the container with id 0161cb79aaa119de25c1bd8c3ec664da11cda91f573f921f3f31a31bed39bdd0 Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.677175 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.678794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.685700 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.687548 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.693713 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.693890 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.693713 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.694067 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fjpw6" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.698535 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.711010 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.712677 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.718455 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.729795 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.829874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.829927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.829960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.829989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-config\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830061 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4db\" (UniqueName: \"kubernetes.io/projected/ca469349-62e4-4ab0-bba0-66bc5d4c1956-kube-api-access-zf4db\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830552 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv9l\" (UniqueName: \"kubernetes.io/projected/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-kube-api-access-bgv9l\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830602 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-config\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddht\" (UniqueName: \"kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.830897 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.858978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ac8a3c95-b813-4505-925f-8e750fd8f963","Type":"ContainerStarted","Data":"0161cb79aaa119de25c1bd8c3ec664da11cda91f573f921f3f31a31bed39bdd0"} Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.860051 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerStarted","Data":"811a3ce45e287556d16b4d6593dbcfea5e69d932639001cc8c108a7643049658"} Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.862120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"11d61978-7f41-441f-b6b7-18c00e684f58","Type":"ContainerStarted","Data":"e524d2b14b196b49dfb6b04c83e9bf6e0e357a50894a58946bd1c689a489b716"} Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-config\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4db\" (UniqueName: \"kubernetes.io/projected/ca469349-62e4-4ab0-bba0-66bc5d4c1956-kube-api-access-zf4db\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.932985 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv9l\" (UniqueName: \"kubernetes.io/projected/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-kube-api-access-bgv9l\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933120 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-config\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 
09:19:00.933145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddht\" (UniqueName: \"kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.933346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.934155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.934475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.934727 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca469349-62e4-4ab0-bba0-66bc5d4c1956-config\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.934849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.935155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-config\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.935364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.936033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.936633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.937239 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.937278 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf0daad9fcea95764ec71d7f5e90d0765a581ab547c15fc1a33436511948cdd4/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.937598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.939122 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.939160 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cbaff1778b44a8d3288833adc2864434ed7a4b97687fd1aab9b76c8b4d2c4fd/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.939842 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.939876 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/651f2bd29622c07126e03f69b009f9c04be40a4e9c76dd214a7b20f4c6dd7894/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.939932 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.940488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.940595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.940626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.942953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.947389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " 
pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.949594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.951567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.951890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4db\" (UniqueName: \"kubernetes.io/projected/ca469349-62e4-4ab0-bba0-66bc5d4c1956-kube-api-access-zf4db\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.953241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca469349-62e4-4ab0-bba0-66bc5d4c1956-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.956569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddht\" (UniqueName: \"kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.960085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv9l\" (UniqueName: \"kubernetes.io/projected/c7a007d2-4a0e-44bd-981f-8a56cbd45c50-kube-api-access-bgv9l\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.967456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") pod \"ovsdbserver-sb-0\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.973488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1368c9ae-e8f3-475e-aee2-e2742e930568\") pod \"ovsdbserver-sb-1\" (UID: \"ca469349-62e4-4ab0-bba0-66bc5d4c1956\") " pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:00 crc kubenswrapper[4858]: I1122 09:19:00.976615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c54dbf-20e9-4e6a-82c0-d465e0633105\") pod \"ovsdbserver-sb-2\" (UID: \"c7a007d2-4a0e-44bd-981f-8a56cbd45c50\") " pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.015426 4858 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.030468 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.043657 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.559921 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.665440 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.876631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"c7a007d2-4a0e-44bd-981f-8a56cbd45c50","Type":"ContainerStarted","Data":"d2965e42f3424bbbcd08d1ab5be3bd93eb9dc40be20c408a522a4011d7426689"} Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.878107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerStarted","Data":"66b2030990f3445a98e8b16092db748e3ec88e952625b09bd8cf6dca7cb4085a"} Nov 22 09:19:01 crc kubenswrapper[4858]: I1122 09:19:01.878252 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5867" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="registry-server" containerID="cri-o://b48d41200525c502bc964dc2d7f83d5545dde4e6b95a0033b4bf02b7f6bc99d5" gracePeriod=2 Nov 22 09:19:02 crc kubenswrapper[4858]: I1122 09:19:02.162588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 09:19:02 crc kubenswrapper[4858]: I1122 09:19:02.890563 4858 generic.go:334] "Generic (PLEG): container finished" podID="3da08508-265b-4056-884c-f4ac7714447a" containerID="b48d41200525c502bc964dc2d7f83d5545dde4e6b95a0033b4bf02b7f6bc99d5" exitCode=0 Nov 22 09:19:02 crc kubenswrapper[4858]: I1122 09:19:02.890605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerDied","Data":"b48d41200525c502bc964dc2d7f83d5545dde4e6b95a0033b4bf02b7f6bc99d5"} Nov 22 09:19:03 crc kubenswrapper[4858]: W1122 09:19:03.648641 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca469349_62e4_4ab0_bba0_66bc5d4c1956.slice/crio-79ce15726f2526ce918d661f74098aa459870146e172fbf00268ac7426666551 WatchSource:0}: Error finding container 79ce15726f2526ce918d661f74098aa459870146e172fbf00268ac7426666551: Status 404 returned error can't find the container with id 79ce15726f2526ce918d661f74098aa459870146e172fbf00268ac7426666551 Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.740499 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.903194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5867" event={"ID":"3da08508-265b-4056-884c-f4ac7714447a","Type":"ContainerDied","Data":"6a99c68eabb82584071358e67401845e862a6b5d7d1c5fbad43ba7e0b4f227cd"} Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.903241 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5867" Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.903246 4858 scope.go:117] "RemoveContainer" containerID="b48d41200525c502bc964dc2d7f83d5545dde4e6b95a0033b4bf02b7f6bc99d5" Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.904339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ca469349-62e4-4ab0-bba0-66bc5d4c1956","Type":"ContainerStarted","Data":"79ce15726f2526ce918d661f74098aa459870146e172fbf00268ac7426666551"} Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.907129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content\") pod \"3da08508-265b-4056-884c-f4ac7714447a\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.907259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities\") pod \"3da08508-265b-4056-884c-f4ac7714447a\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.907389 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cz55\" (UniqueName: \"kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55\") pod \"3da08508-265b-4056-884c-f4ac7714447a\" (UID: \"3da08508-265b-4056-884c-f4ac7714447a\") " Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.908037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities" (OuterVolumeSpecName: "utilities") pod "3da08508-265b-4056-884c-f4ac7714447a" (UID: "3da08508-265b-4056-884c-f4ac7714447a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.910209 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:03 crc kubenswrapper[4858]: I1122 09:19:03.922591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55" (OuterVolumeSpecName: "kube-api-access-6cz55") pod "3da08508-265b-4056-884c-f4ac7714447a" (UID: "3da08508-265b-4056-884c-f4ac7714447a"). InnerVolumeSpecName "kube-api-access-6cz55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.012276 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cz55\" (UniqueName: \"kubernetes.io/projected/3da08508-265b-4056-884c-f4ac7714447a-kube-api-access-6cz55\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.015996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da08508-265b-4056-884c-f4ac7714447a" (UID: "3da08508-265b-4056-884c-f4ac7714447a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.029309 4858 scope.go:117] "RemoveContainer" containerID="2861976864b9cb2e6d54b32e5fe533d8f20f154c6ea792ff0bd039a61d96269f" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.086581 4858 scope.go:117] "RemoveContainer" containerID="71dc22e775ff5eb915aba2b3fb9e8772b931fd79cadbad53941123d5690cb563" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.113365 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da08508-265b-4056-884c-f4ac7714447a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.238700 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.244571 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5867"] Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.913832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ac8a3c95-b813-4505-925f-8e750fd8f963","Type":"ContainerStarted","Data":"50905acc36a255921d46a810e8ba9639e45ff834c28a93222166fdb782d234e0"} Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.916184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ca469349-62e4-4ab0-bba0-66bc5d4c1956","Type":"ContainerStarted","Data":"6ddf69b96ffe23830184389245ff9d997e488b6424b579972a45304e19ea369f"} Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.918049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerStarted","Data":"ed0fb13c9d313c0057e131d50ff2e7899fad257cf3bed38b14bae9253765bc88"} Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.920060 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerStarted","Data":"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d"} Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.925811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"c7a007d2-4a0e-44bd-981f-8a56cbd45c50","Type":"ContainerStarted","Data":"1a79ba76faacbe87bafaa6e1a6b8b6d50654793862048de7d9d1476776d92466"} Nov 22 09:19:04 crc kubenswrapper[4858]: I1122 09:19:04.928391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"11d61978-7f41-441f-b6b7-18c00e684f58","Type":"ContainerStarted","Data":"d26aab70979258f603c33c3d37146394d630de76f8d4f6960798c3c32852e72f"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.547003 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da08508-265b-4056-884c-f4ac7714447a" path="/var/lib/kubelet/pods/3da08508-265b-4056-884c-f4ac7714447a/volumes" Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.939644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"c7a007d2-4a0e-44bd-981f-8a56cbd45c50","Type":"ContainerStarted","Data":"b4124e3465da381167bea724f4e7c9031ca9de71759512fab97869e75d371785"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.943911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"11d61978-7f41-441f-b6b7-18c00e684f58","Type":"ContainerStarted","Data":"f6309275ad269cfccf8fdb8ad351b57fbe51319c375fa31979be8f85cecadbc3"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.946296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ac8a3c95-b813-4505-925f-8e750fd8f963","Type":"ContainerStarted","Data":"32aede7cf2cd8fdaade9c4d13c3a6835fbd9e148895541f8bdc84ec2cf52db21"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.948789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ca469349-62e4-4ab0-bba0-66bc5d4c1956","Type":"ContainerStarted","Data":"216b5090996fa3e09c1c2111cb8e5434b009b80a4f59aa7219be2938522a73c1"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.951711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerStarted","Data":"81501abf74e9ba60e651cc176ffd7cdf6b825e1cbbf8a19b79bc21f69b3efd8e"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.956291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerStarted","Data":"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217"} Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.994381 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.217895127 podStartE2EDuration="6.994365032s" podCreationTimestamp="2025-11-22 09:18:59 +0000 UTC" firstStartedPulling="2025-11-22 09:19:01.673078779 +0000 UTC m=+7703.514501785" lastFinishedPulling="2025-11-22 09:19:04.449548674 +0000 UTC m=+7706.290971690" observedRunningTime="2025-11-22 09:19:05.970107496 +0000 UTC m=+7707.811530512" watchObservedRunningTime="2025-11-22 09:19:05.994365032 +0000 UTC m=+7707.835788038" Nov 22 09:19:05 crc kubenswrapper[4858]: I1122 09:19:05.997424 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=6.19253609 podStartE2EDuration="6.997415539s" podCreationTimestamp="2025-11-22 09:18:59 +0000 UTC" firstStartedPulling="2025-11-22 09:19:03.65168812 +0000 UTC m=+7705.493111126" lastFinishedPulling="2025-11-22 09:19:04.456567569 +0000 UTC m=+7706.297990575" observedRunningTime="2025-11-22 09:19:05.990298802 +0000 UTC m=+7707.831721848" watchObservedRunningTime="2025-11-22 09:19:05.997415539 +0000 UTC m=+7707.838838545" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.016283 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.025242 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.897454519 podStartE2EDuration="8.025220549s" podCreationTimestamp="2025-11-22 09:18:58 +0000 UTC" firstStartedPulling="2025-11-22 09:19:00.243735966 +0000 UTC m=+7702.085158972" lastFinishedPulling="2025-11-22 09:19:04.371501996 +0000 UTC m=+7706.212925002" observedRunningTime="2025-11-22 09:19:06.016008855 +0000 UTC m=+7707.857431891" watchObservedRunningTime="2025-11-22 09:19:06.025220549 +0000 UTC m=+7707.866643575" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.031226 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.041899 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.100008942 podStartE2EDuration="8.041882302s" podCreationTimestamp="2025-11-22 09:18:58 +0000 UTC" firstStartedPulling="2025-11-22 09:19:00.440787683 +0000 UTC m=+7702.282210689" lastFinishedPulling="2025-11-22 09:19:04.382661043 +0000 UTC m=+7706.224084049" observedRunningTime="2025-11-22 09:19:06.036063326 +0000 UTC m=+7707.877486342" watchObservedRunningTime="2025-11-22 09:19:06.041882302 +0000 UTC m=+7707.883305308" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.043848 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.061169 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.68961788 podStartE2EDuration="8.06114714s" podCreationTimestamp="2025-11-22 09:18:58 +0000 UTC" firstStartedPulling="2025-11-22 09:18:59.999972006 +0000 UTC m=+7701.841395012" lastFinishedPulling="2025-11-22 09:19:04.371501266 +0000 UTC m=+7706.212924272" observedRunningTime="2025-11-22 09:19:06.051950895 +0000 UTC m=+7707.893373891" watchObservedRunningTime="2025-11-22 09:19:06.06114714 +0000 UTC m=+7707.902570166" Nov 22 09:19:06 crc kubenswrapper[4858]: I1122 09:19:06.073191 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.245021327 podStartE2EDuration="7.073169064s" podCreationTimestamp="2025-11-22 09:18:59 +0000 UTC" firstStartedPulling="2025-11-22 09:19:01.604105272 +0000 UTC m=+7703.445528279" lastFinishedPulling="2025-11-22 09:19:04.43225299 +0000 UTC m=+7706.273676016" observedRunningTime="2025-11-22 09:19:06.071734058 +0000 UTC m=+7707.913157084" watchObservedRunningTime="2025-11-22 09:19:06.073169064 +0000 UTC m=+7707.914592100" Nov 22 09:19:07 crc kubenswrapper[4858]: I1122 09:19:07.016676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:07 crc kubenswrapper[4858]: I1122 09:19:07.031709 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:07 crc kubenswrapper[4858]: I1122 09:19:07.044764 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:07 crc kubenswrapper[4858]: I1122 09:19:07.056238 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:07 crc 
kubenswrapper[4858]: I1122 09:19:07.091942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:07 crc kubenswrapper[4858]: I1122 09:19:07.110422 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.580882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.598782 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.624580 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.645820 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.927549 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.965596 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.981061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.981380 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 09:19:08 crc kubenswrapper[4858]: I1122 09:19:08.981393 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.641297 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.646802 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.919593 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:09 crc kubenswrapper[4858]: E1122 09:19:09.919890 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="extract-content" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.919902 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="extract-content" Nov 22 09:19:09 crc kubenswrapper[4858]: E1122 09:19:09.919935 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="extract-utilities" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.919941 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="extract-utilities" Nov 22 09:19:09 crc kubenswrapper[4858]: E1122 09:19:09.919951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="registry-server" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.919958 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="registry-server" Nov 22 09:19:09 
crc kubenswrapper[4858]: I1122 09:19:09.920126 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da08508-265b-4056-884c-f4ac7714447a" containerName="registry-server" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.922227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.926438 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.927377 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:09 crc kubenswrapper[4858]: I1122 09:19:09.970682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.021854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.022081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7sn5\" (UniqueName: \"kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.022124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.022181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.123982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7sn5\" (UniqueName: \"kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.124066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.124144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: 
\"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.124986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.125864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.126434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.127105 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.141681 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7sn5\" (UniqueName: \"kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5\") pod \"dnsmasq-dns-55cd558fc5-7b7d8\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.251769 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:10 crc kubenswrapper[4858]: I1122 09:19:10.701590 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.009641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" event={"ID":"15ee33a2-282c-4c83-aa73-1e1cbffab158","Type":"ContainerStarted","Data":"a94158112a80fcc59518300acf51b8f3b48e675fa3044761571d400360c876d1"} Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.065099 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.084384 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.098774 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.255585 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.292977 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.294655 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.296880 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.302464 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.348805 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.348921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.348957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945s6\" (UniqueName: \"kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.349073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc 
kubenswrapper[4858]: I1122 09:19:11.349137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.451052 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.451417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.451491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.451518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945s6\" (UniqueName: \"kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.451600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.452029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.452430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.452479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.452597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.471536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945s6\" (UniqueName: \"kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6\") pod \"dnsmasq-dns-6fbd96959f-cfv26\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:11 crc kubenswrapper[4858]: I1122 09:19:11.616333 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:12 crc kubenswrapper[4858]: I1122 09:19:12.017907 4858 generic.go:334] "Generic (PLEG): container finished" podID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerID="38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0" exitCode=0 Nov 22 09:19:12 crc kubenswrapper[4858]: I1122 09:19:12.018013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" event={"ID":"15ee33a2-282c-4c83-aa73-1e1cbffab158","Type":"ContainerDied","Data":"38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0"} Nov 22 09:19:12 crc kubenswrapper[4858]: I1122 09:19:12.681963 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.028608 4858 generic.go:334] "Generic (PLEG): container finished" podID="a374cd19-18a6-4859-988f-3150a915ef2a" containerID="560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770" exitCode=0 Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.028711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" event={"ID":"a374cd19-18a6-4859-988f-3150a915ef2a","Type":"ContainerDied","Data":"560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770"} Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.028794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" event={"ID":"a374cd19-18a6-4859-988f-3150a915ef2a","Type":"ContainerStarted","Data":"79ffd20f6568f1439b2922184ca80fb51f5907cddd03b8c84cb6ab8d28f3a9de"} Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.034117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" event={"ID":"15ee33a2-282c-4c83-aa73-1e1cbffab158","Type":"ContainerStarted","Data":"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa"} Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.034239 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="dnsmasq-dns" containerID="cri-o://290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa" gracePeriod=10 Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.034374 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.085008 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" podStartSLOduration=4.08498825 podStartE2EDuration="4.08498825s" 
podCreationTimestamp="2025-11-22 09:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:13.08093686 +0000 UTC m=+7714.922359866" watchObservedRunningTime="2025-11-22 09:19:13.08498825 +0000 UTC m=+7714.926411266" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.530428 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.589262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc\") pod \"15ee33a2-282c-4c83-aa73-1e1cbffab158\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.589383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7sn5\" (UniqueName: \"kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5\") pod \"15ee33a2-282c-4c83-aa73-1e1cbffab158\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.589442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb\") pod \"15ee33a2-282c-4c83-aa73-1e1cbffab158\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.589523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config\") pod \"15ee33a2-282c-4c83-aa73-1e1cbffab158\" (UID: \"15ee33a2-282c-4c83-aa73-1e1cbffab158\") " Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.594363 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5" (OuterVolumeSpecName: "kube-api-access-p7sn5") pod "15ee33a2-282c-4c83-aa73-1e1cbffab158" (UID: "15ee33a2-282c-4c83-aa73-1e1cbffab158"). InnerVolumeSpecName "kube-api-access-p7sn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.630591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15ee33a2-282c-4c83-aa73-1e1cbffab158" (UID: "15ee33a2-282c-4c83-aa73-1e1cbffab158"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.635700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config" (OuterVolumeSpecName: "config") pod "15ee33a2-282c-4c83-aa73-1e1cbffab158" (UID: "15ee33a2-282c-4c83-aa73-1e1cbffab158"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.642823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15ee33a2-282c-4c83-aa73-1e1cbffab158" (UID: "15ee33a2-282c-4c83-aa73-1e1cbffab158"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.657031 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:19:13 crc kubenswrapper[4858]: E1122 09:19:13.657428 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="init" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.657451 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="init" Nov 22 09:19:13 crc kubenswrapper[4858]: E1122 09:19:13.657509 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="dnsmasq-dns" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.657521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="dnsmasq-dns" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.657775 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerName="dnsmasq-dns" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.658529 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.663251 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.676572 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnqg\" (UniqueName: \"kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691726 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 
09:19:13.691807 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691857 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7sn5\" (UniqueName: \"kubernetes.io/projected/15ee33a2-282c-4c83-aa73-1e1cbffab158-kube-api-access-p7sn5\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.691905 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ee33a2-282c-4c83-aa73-1e1cbffab158-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.793082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.793190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnqg\" (UniqueName: \"kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.793250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.796893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.797700 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.797744 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f037cebece3abe674aadf5293a378beaa24bfac1b265d48201adc11ff332b55d/globalmount\"" pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.810202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghnqg\" (UniqueName: \"kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:13 crc kubenswrapper[4858]: I1122 09:19:13.827556 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") pod \"ovn-copy-data\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " pod="openstack/ovn-copy-data" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.003153 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.048204 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" event={"ID":"a374cd19-18a6-4859-988f-3150a915ef2a","Type":"ContainerStarted","Data":"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165"} Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.048511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.050510 4858 generic.go:334] "Generic (PLEG): container finished" podID="15ee33a2-282c-4c83-aa73-1e1cbffab158" containerID="290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa" exitCode=0 Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.050592 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.050560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" event={"ID":"15ee33a2-282c-4c83-aa73-1e1cbffab158","Type":"ContainerDied","Data":"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa"} Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.050753 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55cd558fc5-7b7d8" event={"ID":"15ee33a2-282c-4c83-aa73-1e1cbffab158","Type":"ContainerDied","Data":"a94158112a80fcc59518300acf51b8f3b48e675fa3044761571d400360c876d1"} Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.050786 4858 scope.go:117] "RemoveContainer" containerID="290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.080779 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" podStartSLOduration=3.080760267 podStartE2EDuration="3.080760267s" podCreationTimestamp="2025-11-22 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:14.075809899 +0000 UTC m=+7715.917232905" watchObservedRunningTime="2025-11-22 09:19:14.080760267 +0000 UTC m=+7715.922183273" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.097800 4858 scope.go:117] "RemoveContainer" containerID="38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.098381 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.103787 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55cd558fc5-7b7d8"] Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.125609 4858 scope.go:117] "RemoveContainer" containerID="290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa" Nov 22 09:19:14 crc kubenswrapper[4858]: E1122 09:19:14.126220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa\": container with ID starting with 290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa not found: ID does not exist" containerID="290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.126258 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa"} err="failed to get container status \"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa\": rpc error: code = NotFound desc = could not find container \"290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa\": container with ID starting with 290fbe9aa52adec30bd8ed901dafd9cc1787e8922dfbefcd0f49e9e3ca0b16fa not found: ID does not exist" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.126287 4858 scope.go:117] "RemoveContainer" containerID="38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0" Nov 22 09:19:14 crc kubenswrapper[4858]: E1122 09:19:14.126584 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0\": container with ID starting with 38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0 not found: ID does not exist" containerID="38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0" Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.126614 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0"} err="failed to get container status \"38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0\": rpc error: code = NotFound desc = could not find container \"38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0\": container with ID starting with 38441bf5ed162cb1a0b2f6d0787a6d3c49766e22396574b9e1e5d92b45da68f0 not found: ID does not exist" Nov 22 09:19:14 crc kubenswrapper[4858]: W1122 09:19:14.550465 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b14e62d_03f3_44cf_9b81_f5c0511865cd.slice/crio-af2dc15b08c986658fceeb53e71d1e59e12e1725093b35dd9f183bdd1af449a8 WatchSource:0}: Error finding container af2dc15b08c986658fceeb53e71d1e59e12e1725093b35dd9f183bdd1af449a8: Status 404 returned error can't find the container with id af2dc15b08c986658fceeb53e71d1e59e12e1725093b35dd9f183bdd1af449a8 Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.550491 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:19:14 crc kubenswrapper[4858]: I1122 09:19:14.552353 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.058013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7b14e62d-03f3-44cf-9b81-f5c0511865cd","Type":"ContainerStarted","Data":"748c7fd5b8d2394c9cb02c31b7296c97713c51013b8ab56c9ede3e3f67b3d1dd"} Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.058437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7b14e62d-03f3-44cf-9b81-f5c0511865cd","Type":"ContainerStarted","Data":"af2dc15b08c986658fceeb53e71d1e59e12e1725093b35dd9f183bdd1af449a8"} Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.074810 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.885193422 podStartE2EDuration="3.074792069s" podCreationTimestamp="2025-11-22 09:19:12 +0000 UTC" firstStartedPulling="2025-11-22 09:19:14.552135243 +0000 UTC m=+7716.393558249" lastFinishedPulling="2025-11-22 09:19:14.74173389 +0000 UTC m=+7716.583156896" observedRunningTime="2025-11-22 09:19:15.070860353 +0000 UTC m=+7716.912283369" watchObservedRunningTime="2025-11-22 09:19:15.074792069 +0000 UTC m=+7716.916215075" Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.311763 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.311833 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:19:15 crc kubenswrapper[4858]: I1122 09:19:15.546207 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ee33a2-282c-4c83-aa73-1e1cbffab158" path="/var/lib/kubelet/pods/15ee33a2-282c-4c83-aa73-1e1cbffab158/volumes" Nov 22 09:19:21 crc kubenswrapper[4858]: I1122 09:19:21.617603 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:21 crc kubenswrapper[4858]: I1122 09:19:21.705959 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:19:21 crc kubenswrapper[4858]: I1122 09:19:21.706227 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="dnsmasq-dns" containerID="cri-o://66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1" gracePeriod=10 Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.151999 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.152691 4858 generic.go:334] "Generic (PLEG): container finished" podID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerID="66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1" exitCode=0 Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.152744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" event={"ID":"a8138fdb-6c2b-443c-860a-f2fbc31b04b9","Type":"ContainerDied","Data":"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1"} Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.152786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" event={"ID":"a8138fdb-6c2b-443c-860a-f2fbc31b04b9","Type":"ContainerDied","Data":"998a37e05556e7f2390ddd1bfb4230e6700e84eab75fbedff1472d107aa3f4a7"} Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.152809 4858 scope.go:117] "RemoveContainer" containerID="66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.171749 4858 scope.go:117] "RemoveContainer" containerID="7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.196303 4858 scope.go:117] "RemoveContainer" containerID="66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1" Nov 22 09:19:22 crc kubenswrapper[4858]: E1122 09:19:22.196915 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1\": container with ID starting with 66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1 not found: ID does not exist" containerID="66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.196961 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1"} err="failed to get container status \"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1\": rpc error: code = NotFound 
desc = could not find container \"66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1\": container with ID starting with 66ebaac6468dcbd8179ff631773755b18785bc7114a4d69f6094d683471f0ab1 not found: ID does not exist" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.196991 4858 scope.go:117] "RemoveContainer" containerID="7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e" Nov 22 09:19:22 crc kubenswrapper[4858]: E1122 09:19:22.197390 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e\": container with ID starting with 7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e not found: ID does not exist" containerID="7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.197411 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e"} err="failed to get container status \"7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e\": rpc error: code = NotFound desc = could not find container \"7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e\": container with ID starting with 7a71778f4f3856515daa123d378ae79ca19aaee462113f7ed104fbfa19c3721e not found: ID does not exist" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.320756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config\") pod \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.320803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmnk\" (UniqueName: \"kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk\") pod \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.320859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc\") pod \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\" (UID: \"a8138fdb-6c2b-443c-860a-f2fbc31b04b9\") " Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.331493 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk" (OuterVolumeSpecName: "kube-api-access-qbmnk") pod "a8138fdb-6c2b-443c-860a-f2fbc31b04b9" (UID: "a8138fdb-6c2b-443c-860a-f2fbc31b04b9"). InnerVolumeSpecName "kube-api-access-qbmnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.359936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8138fdb-6c2b-443c-860a-f2fbc31b04b9" (UID: "a8138fdb-6c2b-443c-860a-f2fbc31b04b9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.365000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config" (OuterVolumeSpecName: "config") pod "a8138fdb-6c2b-443c-860a-f2fbc31b04b9" (UID: "a8138fdb-6c2b-443c-860a-f2fbc31b04b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.423007 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.423043 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmnk\" (UniqueName: \"kubernetes.io/projected/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-kube-api-access-qbmnk\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:22 crc kubenswrapper[4858]: I1122 09:19:22.423055 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8138fdb-6c2b-443c-860a-f2fbc31b04b9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.035263 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:19:23 crc kubenswrapper[4858]: E1122 09:19:23.035633 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="init" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.035652 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="init" Nov 22 09:19:23 crc kubenswrapper[4858]: E1122 09:19:23.035665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="dnsmasq-dns" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.035673 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="dnsmasq-dns" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.035823 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" containerName="dnsmasq-dns" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.036643 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.039876 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ts2z2" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.040772 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.040925 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.043131 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.050638 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.133682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.133742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.133924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.134091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbbq\" (UniqueName: \"kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.134211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.134300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.134644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: 
I1122 09:19:23.162794 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-ff6cj" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.197758 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.203476 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-ff6cj"] Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbbq\" (UniqueName: \"kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.236512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.237180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.237210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config\") pod 
\"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.237464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.242306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.242908 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.244742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.271800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbbq\" (UniqueName: \"kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq\") pod \"ovn-northd-0\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.363385 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.546679 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8138fdb-6c2b-443c-860a-f2fbc31b04b9" path="/var/lib/kubelet/pods/a8138fdb-6c2b-443c-860a-f2fbc31b04b9/volumes" Nov 22 09:19:23 crc kubenswrapper[4858]: I1122 09:19:23.812081 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:19:24 crc kubenswrapper[4858]: I1122 09:19:24.173287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerStarted","Data":"75690b4e9af7a56e00f17a58b8d1e76359ca5b7d031ccb11514048168edcd317"} Nov 22 09:19:25 crc kubenswrapper[4858]: I1122 09:19:25.182670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerStarted","Data":"67e1251b14e7b433d9d3a2e216ea575663775c6d203919cad452b0e46788dce2"} Nov 22 09:19:25 crc kubenswrapper[4858]: I1122 09:19:25.183166 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 09:19:25 crc kubenswrapper[4858]: I1122 09:19:25.183182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerStarted","Data":"07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f"} Nov 22 09:19:25 crc kubenswrapper[4858]: I1122 09:19:25.206790 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.415541788 podStartE2EDuration="2.206768659s" podCreationTimestamp="2025-11-22 09:19:23 +0000 UTC" firstStartedPulling="2025-11-22 09:19:23.823847782 +0000 UTC m=+7725.665270788" lastFinishedPulling="2025-11-22 09:19:24.615074653 +0000 UTC m=+7726.456497659" observedRunningTime="2025-11-22 09:19:25.202959447 +0000 UTC m=+7727.044382463" watchObservedRunningTime="2025-11-22 09:19:25.206768659 +0000 UTC m=+7727.048191665" Nov 22 09:19:30 crc kubenswrapper[4858]: I1122 09:19:30.917053 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8k55m"] Nov 22 09:19:30 crc kubenswrapper[4858]: I1122 09:19:30.918653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:30 crc kubenswrapper[4858]: I1122 09:19:30.936567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8k55m"] Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.009997 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9b4b-account-create-zc56d"] Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.010996 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.012782 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.037007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9b4b-account-create-zc56d"] Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.068277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbwf\" (UniqueName: \"kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf\") pod \"keystone-db-create-8k55m\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.068332 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts\") pod \"keystone-db-create-8k55m\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.169708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts\") pod \"keystone-db-create-8k55m\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.169771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.169811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkc4\" (UniqueName: \"kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.169952 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbwf\" (UniqueName: \"kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf\") pod \"keystone-db-create-8k55m\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.170542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts\") pod \"keystone-db-create-8k55m\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.192789 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbwf\" (UniqueName: \"kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf\") pod \"keystone-db-create-8k55m\" (UID: 
\"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.235352 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.271287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.271935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkc4\" (UniqueName: \"kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.272276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.292641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkc4\" (UniqueName: \"kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4\") pod \"keystone-9b4b-account-create-zc56d\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.340356 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.669990 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8k55m"] Nov 22 09:19:31 crc kubenswrapper[4858]: W1122 09:19:31.675240 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcba61de2_085b_4ec5_ab7a_08e789be3bfc.slice/crio-748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f WatchSource:0}: Error finding container 748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f: Status 404 returned error can't find the container with id 748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f Nov 22 09:19:31 crc kubenswrapper[4858]: I1122 09:19:31.790570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9b4b-account-create-zc56d"] Nov 22 09:19:31 crc kubenswrapper[4858]: W1122 09:19:31.796493 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855716ba_1fbf_4f5f_9f67_0a80465ebe0a.slice/crio-2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f WatchSource:0}: Error finding container 2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f: Status 404 returned error can't find the container with id 2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.236908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8k55m" event={"ID":"cba61de2-085b-4ec5-ab7a-08e789be3bfc","Type":"ContainerStarted","Data":"b2a538c5e5d4be370d699ca1781b6a69ff6d406f53969c96747dfc641f02d840"} Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.236965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8k55m" event={"ID":"cba61de2-085b-4ec5-ab7a-08e789be3bfc","Type":"ContainerStarted","Data":"748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f"} Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.238247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b4b-account-create-zc56d" event={"ID":"855716ba-1fbf-4f5f-9f67-0a80465ebe0a","Type":"ContainerStarted","Data":"af372afad967b23f67cf29821ca97e5317e9a9c4df206a68425c39a7708ca250"} Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.238296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b4b-account-create-zc56d" event={"ID":"855716ba-1fbf-4f5f-9f67-0a80465ebe0a","Type":"ContainerStarted","Data":"2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f"} Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.255934 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-8k55m" podStartSLOduration=2.255886079 podStartE2EDuration="2.255886079s" podCreationTimestamp="2025-11-22 09:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:32.249827105 +0000 UTC m=+7734.091250111" watchObservedRunningTime="2025-11-22 09:19:32.255886079 +0000 UTC m=+7734.097309085" Nov 22 09:19:32 crc kubenswrapper[4858]: I1122 09:19:32.265976 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9b4b-account-create-zc56d" podStartSLOduration=2.265952981 
podStartE2EDuration="2.265952981s" podCreationTimestamp="2025-11-22 09:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:32.263055978 +0000 UTC m=+7734.104478984" watchObservedRunningTime="2025-11-22 09:19:32.265952981 +0000 UTC m=+7734.107375987" Nov 22 09:19:33 crc kubenswrapper[4858]: I1122 09:19:33.246816 4858 generic.go:334] "Generic (PLEG): container finished" podID="855716ba-1fbf-4f5f-9f67-0a80465ebe0a" containerID="af372afad967b23f67cf29821ca97e5317e9a9c4df206a68425c39a7708ca250" exitCode=0 Nov 22 09:19:33 crc kubenswrapper[4858]: I1122 09:19:33.246867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b4b-account-create-zc56d" event={"ID":"855716ba-1fbf-4f5f-9f67-0a80465ebe0a","Type":"ContainerDied","Data":"af372afad967b23f67cf29821ca97e5317e9a9c4df206a68425c39a7708ca250"} Nov 22 09:19:33 crc kubenswrapper[4858]: I1122 09:19:33.248982 4858 generic.go:334] "Generic (PLEG): container finished" podID="cba61de2-085b-4ec5-ab7a-08e789be3bfc" containerID="b2a538c5e5d4be370d699ca1781b6a69ff6d406f53969c96747dfc641f02d840" exitCode=0 Nov 22 09:19:33 crc kubenswrapper[4858]: I1122 09:19:33.249025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8k55m" event={"ID":"cba61de2-085b-4ec5-ab7a-08e789be3bfc","Type":"ContainerDied","Data":"b2a538c5e5d4be370d699ca1781b6a69ff6d406f53969c96747dfc641f02d840"} Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.685245 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.691954 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.833386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvbwf\" (UniqueName: \"kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf\") pod \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.833474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkc4\" (UniqueName: \"kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4\") pod \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.833506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts\") pod \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\" (UID: \"cba61de2-085b-4ec5-ab7a-08e789be3bfc\") " Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.833614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts\") pod \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\" (UID: \"855716ba-1fbf-4f5f-9f67-0a80465ebe0a\") " Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.834264 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "855716ba-1fbf-4f5f-9f67-0a80465ebe0a" (UID: "855716ba-1fbf-4f5f-9f67-0a80465ebe0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.837906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cba61de2-085b-4ec5-ab7a-08e789be3bfc" (UID: "cba61de2-085b-4ec5-ab7a-08e789be3bfc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.840502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf" (OuterVolumeSpecName: "kube-api-access-dvbwf") pod "cba61de2-085b-4ec5-ab7a-08e789be3bfc" (UID: "cba61de2-085b-4ec5-ab7a-08e789be3bfc"). InnerVolumeSpecName "kube-api-access-dvbwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.841457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4" (OuterVolumeSpecName: "kube-api-access-zqkc4") pod "855716ba-1fbf-4f5f-9f67-0a80465ebe0a" (UID: "855716ba-1fbf-4f5f-9f67-0a80465ebe0a"). InnerVolumeSpecName "kube-api-access-zqkc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:34 crc kubenswrapper[4858]: E1122 09:19:34.897827 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.935472 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvbwf\" (UniqueName: \"kubernetes.io/projected/cba61de2-085b-4ec5-ab7a-08e789be3bfc-kube-api-access-dvbwf\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.935792 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqkc4\" (UniqueName: \"kubernetes.io/projected/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-kube-api-access-zqkc4\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.935806 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba61de2-085b-4ec5-ab7a-08e789be3bfc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:34 crc kubenswrapper[4858]: I1122 09:19:34.935816 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/855716ba-1fbf-4f5f-9f67-0a80465ebe0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.275649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9b4b-account-create-zc56d" event={"ID":"855716ba-1fbf-4f5f-9f67-0a80465ebe0a","Type":"ContainerDied","Data":"2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f"} Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.275861 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2980b4339198eb9fa9de2824f2cf77543c88e831868d6a093e73eb9b6559a54f" Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.275963 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9b4b-account-create-zc56d" Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.287381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8k55m" event={"ID":"cba61de2-085b-4ec5-ab7a-08e789be3bfc","Type":"ContainerDied","Data":"748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f"} Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.288320 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="748d0af88a833c9650f29837b69563f43eebe24c6339dddd8a7e7fa4e57c5a9f" Nov 22 09:19:35 crc kubenswrapper[4858]: I1122 09:19:35.288465 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8k55m" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.506142 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6nh8p"] Nov 22 09:19:36 crc kubenswrapper[4858]: E1122 09:19:36.507099 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba61de2-085b-4ec5-ab7a-08e789be3bfc" containerName="mariadb-database-create" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.507146 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba61de2-085b-4ec5-ab7a-08e789be3bfc" containerName="mariadb-database-create" Nov 22 09:19:36 crc kubenswrapper[4858]: E1122 09:19:36.507159 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855716ba-1fbf-4f5f-9f67-0a80465ebe0a" containerName="mariadb-account-create" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.507168 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="855716ba-1fbf-4f5f-9f67-0a80465ebe0a" containerName="mariadb-account-create" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.507384 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba61de2-085b-4ec5-ab7a-08e789be3bfc" containerName="mariadb-database-create" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.507405 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="855716ba-1fbf-4f5f-9f67-0a80465ebe0a" containerName="mariadb-account-create" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.508261 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.510784 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.510783 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nhhkf" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.511019 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.515169 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.516102 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6nh8p"] Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.664027 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zks74\" (UniqueName: \"kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.664457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.664560 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle\") pod 
\"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.766448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.766520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zks74\" (UniqueName: \"kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.766570 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.773198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.782243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.785448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zks74\" (UniqueName: \"kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74\") pod \"keystone-db-sync-6nh8p\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:36 crc kubenswrapper[4858]: I1122 09:19:36.825229 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:37 crc kubenswrapper[4858]: I1122 09:19:37.301724 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6nh8p"] Nov 22 09:19:38 crc kubenswrapper[4858]: I1122 09:19:38.313959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6nh8p" event={"ID":"d7f800d9-b999-46fe-b9ab-4bac8356fcdd","Type":"ContainerStarted","Data":"a1c3b2cd7e14b51503b94c4abd1a491ee411a85c612041d8b54405fe61b6aa1d"} Nov 22 09:19:38 crc kubenswrapper[4858]: I1122 09:19:38.423528 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 09:19:42 crc kubenswrapper[4858]: I1122 09:19:42.354598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6nh8p" event={"ID":"d7f800d9-b999-46fe-b9ab-4bac8356fcdd","Type":"ContainerStarted","Data":"0b980d9c4b159ec4c43ebc38b297c40800977ddefcee229b71207360876dfad2"} Nov 22 09:19:42 crc kubenswrapper[4858]: I1122 09:19:42.373591 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6nh8p" podStartSLOduration=1.739195869 podStartE2EDuration="6.373573411s" podCreationTimestamp="2025-11-22 09:19:36 +0000 UTC" firstStartedPulling="2025-11-22 09:19:37.308383602 +0000 UTC m=+7739.149806618" lastFinishedPulling="2025-11-22 09:19:41.942761154 +0000 UTC m=+7743.784184160" observedRunningTime="2025-11-22 09:19:42.371306499 +0000 UTC m=+7744.212729505" watchObservedRunningTime="2025-11-22 09:19:42.373573411 +0000 UTC m=+7744.214996417" Nov 22 09:19:44 crc kubenswrapper[4858]: I1122 09:19:44.371200 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7f800d9-b999-46fe-b9ab-4bac8356fcdd" containerID="0b980d9c4b159ec4c43ebc38b297c40800977ddefcee229b71207360876dfad2" exitCode=0 Nov 22 09:19:44 crc kubenswrapper[4858]: I1122 09:19:44.371277 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6nh8p" event={"ID":"d7f800d9-b999-46fe-b9ab-4bac8356fcdd","Type":"ContainerDied","Data":"0b980d9c4b159ec4c43ebc38b297c40800977ddefcee229b71207360876dfad2"} Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.312648 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.313042 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.737601 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.834959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data\") pod \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.835100 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle\") pod \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.835147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zks74\" (UniqueName: \"kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74\") pod \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\" (UID: \"d7f800d9-b999-46fe-b9ab-4bac8356fcdd\") " Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.844154 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74" (OuterVolumeSpecName: "kube-api-access-zks74") pod "d7f800d9-b999-46fe-b9ab-4bac8356fcdd" (UID: "d7f800d9-b999-46fe-b9ab-4bac8356fcdd"). InnerVolumeSpecName "kube-api-access-zks74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.872170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7f800d9-b999-46fe-b9ab-4bac8356fcdd" (UID: "d7f800d9-b999-46fe-b9ab-4bac8356fcdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.898884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data" (OuterVolumeSpecName: "config-data") pod "d7f800d9-b999-46fe-b9ab-4bac8356fcdd" (UID: "d7f800d9-b999-46fe-b9ab-4bac8356fcdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.937511 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.937548 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zks74\" (UniqueName: \"kubernetes.io/projected/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-kube-api-access-zks74\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:45 crc kubenswrapper[4858]: I1122 09:19:45.937562 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f800d9-b999-46fe-b9ab-4bac8356fcdd-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.397798 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6nh8p" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.398699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6nh8p" event={"ID":"d7f800d9-b999-46fe-b9ab-4bac8356fcdd","Type":"ContainerDied","Data":"a1c3b2cd7e14b51503b94c4abd1a491ee411a85c612041d8b54405fe61b6aa1d"} Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.398772 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c3b2cd7e14b51503b94c4abd1a491ee411a85c612041d8b54405fe61b6aa1d" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.560250 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:19:46 crc kubenswrapper[4858]: E1122 09:19:46.561155 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f800d9-b999-46fe-b9ab-4bac8356fcdd" containerName="keystone-db-sync" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.561182 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f800d9-b999-46fe-b9ab-4bac8356fcdd" containerName="keystone-db-sync" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.561412 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f800d9-b999-46fe-b9ab-4bac8356fcdd" containerName="keystone-db-sync" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.562543 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.579928 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.641112 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sjwrx"] Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.642166 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.645154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.645481 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.645652 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.645873 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.646117 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nhhkf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.649189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.649241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.649276 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfdz\" (UniqueName: \"kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.649359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.649403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.697022 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sjwrx"] Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.750767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.750831 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.750869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.750954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.750989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfdz\" (UniqueName: \"kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27cz\" (UniqueName: \"kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.751392 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.752797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.752822 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.753040 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.753099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.773297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfdz\" (UniqueName: \"kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz\") pod \"dnsmasq-dns-7fc7844867-d74sf\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.852716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.852969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.853097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27cz\" (UniqueName: \"kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.853209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.853285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.853396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.856750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.858778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.858840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.859115 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.859370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.869923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27cz\" (UniqueName: \"kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz\") pod \"keystone-bootstrap-sjwrx\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.944140 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:46 crc kubenswrapper[4858]: I1122 09:19:46.966935 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:47 crc kubenswrapper[4858]: I1122 09:19:47.413937 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:19:47 crc kubenswrapper[4858]: W1122 09:19:47.418165 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod253654e9_90fa_4cfd_ac60_8b67c1c1b419.slice/crio-ac58979407b4f3623574241ce36762aef236be06b53c1fecd52a381035cbc644 WatchSource:0}: Error finding container ac58979407b4f3623574241ce36762aef236be06b53c1fecd52a381035cbc644: Status 404 returned error can't find the container with id ac58979407b4f3623574241ce36762aef236be06b53c1fecd52a381035cbc644 Nov 22 09:19:47 crc kubenswrapper[4858]: I1122 09:19:47.496902 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sjwrx"] Nov 22 09:19:47 crc kubenswrapper[4858]: W1122 09:19:47.501687 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0825d126_bc08_4de1_954f_af3e965a6a89.slice/crio-7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8 WatchSource:0}: Error finding container 7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8: Status 404 returned error can't find the container with id 7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8 Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.431373 4858 generic.go:334] "Generic (PLEG): container finished" podID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerID="ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a" exitCode=0 Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.431427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" event={"ID":"253654e9-90fa-4cfd-ac60-8b67c1c1b419","Type":"ContainerDied","Data":"ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a"} Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.432031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" event={"ID":"253654e9-90fa-4cfd-ac60-8b67c1c1b419","Type":"ContainerStarted","Data":"ac58979407b4f3623574241ce36762aef236be06b53c1fecd52a381035cbc644"} Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.490631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sjwrx" event={"ID":"0825d126-bc08-4de1-954f-af3e965a6a89","Type":"ContainerStarted","Data":"d25cdb2a71495a0a4e14f64b067bed818d184d245e7ba480e82f9b14c1dd8c9d"} Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.490999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sjwrx" event={"ID":"0825d126-bc08-4de1-954f-af3e965a6a89","Type":"ContainerStarted","Data":"7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8"} Nov 22 09:19:48 crc kubenswrapper[4858]: I1122 09:19:48.513138 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sjwrx" podStartSLOduration=2.513118163 podStartE2EDuration="2.513118163s" podCreationTimestamp="2025-11-22 09:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:48.510791648 +0000 UTC m=+7750.352214654" watchObservedRunningTime="2025-11-22 09:19:48.513118163 +0000 UTC m=+7750.354541189" Nov 22 
09:19:49 crc kubenswrapper[4858]: I1122 09:19:49.502656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" event={"ID":"253654e9-90fa-4cfd-ac60-8b67c1c1b419","Type":"ContainerStarted","Data":"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee"} Nov 22 09:19:49 crc kubenswrapper[4858]: I1122 09:19:49.503626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:49 crc kubenswrapper[4858]: I1122 09:19:49.522931 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" podStartSLOduration=3.522916619 podStartE2EDuration="3.522916619s" podCreationTimestamp="2025-11-22 09:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:49.521928847 +0000 UTC m=+7751.363351863" watchObservedRunningTime="2025-11-22 09:19:49.522916619 +0000 UTC m=+7751.364339625" Nov 22 09:19:51 crc kubenswrapper[4858]: I1122 09:19:51.515352 4858 generic.go:334] "Generic (PLEG): container finished" podID="0825d126-bc08-4de1-954f-af3e965a6a89" containerID="d25cdb2a71495a0a4e14f64b067bed818d184d245e7ba480e82f9b14c1dd8c9d" exitCode=0 Nov 22 09:19:51 crc kubenswrapper[4858]: I1122 09:19:51.515438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sjwrx" event={"ID":"0825d126-bc08-4de1-954f-af3e965a6a89","Type":"ContainerDied","Data":"d25cdb2a71495a0a4e14f64b067bed818d184d245e7ba480e82f9b14c1dd8c9d"} Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.879455 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.959781 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l27cz\" (UniqueName: \"kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.959911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.959980 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.960036 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.960071 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: 
\"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.960109 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data\") pod \"0825d126-bc08-4de1-954f-af3e965a6a89\" (UID: \"0825d126-bc08-4de1-954f-af3e965a6a89\") " Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.970410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.970940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.972808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts" (OuterVolumeSpecName: "scripts") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.975145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz" (OuterVolumeSpecName: "kube-api-access-l27cz") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "kube-api-access-l27cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.991723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data" (OuterVolumeSpecName: "config-data") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:52 crc kubenswrapper[4858]: I1122 09:19:52.996584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0825d126-bc08-4de1-954f-af3e965a6a89" (UID: "0825d126-bc08-4de1-954f-af3e965a6a89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.063311 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l27cz\" (UniqueName: \"kubernetes.io/projected/0825d126-bc08-4de1-954f-af3e965a6a89-kube-api-access-l27cz\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.063981 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.063995 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.064024 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.064034 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.064042 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0825d126-bc08-4de1-954f-af3e965a6a89-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.536035 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sjwrx" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.565312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sjwrx" event={"ID":"0825d126-bc08-4de1-954f-af3e965a6a89","Type":"ContainerDied","Data":"7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8"} Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.565383 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7203a2a8ad204723309f2e3f7555dea5b6b3c7d1511d6cf72dcdec6eac78a6d8" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.635302 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sjwrx"] Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.641687 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sjwrx"] Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.698174 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gmjl5"] Nov 22 09:19:53 crc kubenswrapper[4858]: E1122 09:19:53.698565 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0825d126-bc08-4de1-954f-af3e965a6a89" containerName="keystone-bootstrap" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.698591 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0825d126-bc08-4de1-954f-af3e965a6a89" containerName="keystone-bootstrap" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.698804 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0825d126-bc08-4de1-954f-af3e965a6a89" containerName="keystone-bootstrap" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.699852 4858 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.701721 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.701922 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nhhkf" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.702417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.702559 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.702606 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.711289 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gmjl5"] Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779254 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779495 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.779723 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.880945 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.881008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.881072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.881129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.881163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.881193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.887735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.888470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.889819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.889902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data\") pod \"keystone-bootstrap-gmjl5\" (UID: 
\"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.895648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:53 crc kubenswrapper[4858]: I1122 09:19:53.898579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt\") pod \"keystone-bootstrap-gmjl5\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:54 crc kubenswrapper[4858]: I1122 09:19:54.057804 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:54 crc kubenswrapper[4858]: I1122 09:19:54.502632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gmjl5"] Nov 22 09:19:54 crc kubenswrapper[4858]: W1122 09:19:54.508671 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18753f8d_6d51_430b_aa62_f9ee41cf917c.slice/crio-0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c WatchSource:0}: Error finding container 0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c: Status 404 returned error can't find the container with id 0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c Nov 22 09:19:54 crc kubenswrapper[4858]: I1122 09:19:54.545928 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gmjl5" event={"ID":"18753f8d-6d51-430b-aa62-f9ee41cf917c","Type":"ContainerStarted","Data":"0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c"} Nov 22 09:19:55 crc kubenswrapper[4858]: I1122 09:19:55.547384 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0825d126-bc08-4de1-954f-af3e965a6a89" path="/var/lib/kubelet/pods/0825d126-bc08-4de1-954f-af3e965a6a89/volumes" Nov 22 09:19:55 crc kubenswrapper[4858]: I1122 09:19:55.558294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gmjl5" event={"ID":"18753f8d-6d51-430b-aa62-f9ee41cf917c","Type":"ContainerStarted","Data":"fcff5d0ddcbefca0ce9e1379c01065d38ff4d96407cf269b7ffd530f8012bc38"} Nov 22 09:19:55 crc kubenswrapper[4858]: I1122 09:19:55.578968 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gmjl5" podStartSLOduration=2.578924697 podStartE2EDuration="2.578924697s" podCreationTimestamp="2025-11-22 09:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:19:55.574539586 +0000 UTC m=+7757.415962622" watchObservedRunningTime="2025-11-22 09:19:55.578924697 +0000 UTC m=+7757.420347703" Nov 22 09:19:56 crc kubenswrapper[4858]: I1122 09:19:56.946616 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.013976 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:57 crc 
kubenswrapper[4858]: I1122 09:19:57.014540 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="dnsmasq-dns" containerID="cri-o://9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165" gracePeriod=10 Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.435489 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.548778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-945s6\" (UniqueName: \"kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6\") pod \"a374cd19-18a6-4859-988f-3150a915ef2a\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.548818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb\") pod \"a374cd19-18a6-4859-988f-3150a915ef2a\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.548889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb\") pod \"a374cd19-18a6-4859-988f-3150a915ef2a\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.548961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc\") pod \"a374cd19-18a6-4859-988f-3150a915ef2a\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.549039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config\") pod \"a374cd19-18a6-4859-988f-3150a915ef2a\" (UID: \"a374cd19-18a6-4859-988f-3150a915ef2a\") " Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.554703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6" (OuterVolumeSpecName: "kube-api-access-945s6") pod "a374cd19-18a6-4859-988f-3150a915ef2a" (UID: "a374cd19-18a6-4859-988f-3150a915ef2a"). InnerVolumeSpecName "kube-api-access-945s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.582605 4858 generic.go:334] "Generic (PLEG): container finished" podID="18753f8d-6d51-430b-aa62-f9ee41cf917c" containerID="fcff5d0ddcbefca0ce9e1379c01065d38ff4d96407cf269b7ffd530f8012bc38" exitCode=0 Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.585014 4858 generic.go:334] "Generic (PLEG): container finished" podID="a374cd19-18a6-4859-988f-3150a915ef2a" containerID="9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165" exitCode=0 Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.585093 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.596542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a374cd19-18a6-4859-988f-3150a915ef2a" (UID: "a374cd19-18a6-4859-988f-3150a915ef2a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.597301 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a374cd19-18a6-4859-988f-3150a915ef2a" (UID: "a374cd19-18a6-4859-988f-3150a915ef2a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.598193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a374cd19-18a6-4859-988f-3150a915ef2a" (UID: "a374cd19-18a6-4859-988f-3150a915ef2a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.602047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config" (OuterVolumeSpecName: "config") pod "a374cd19-18a6-4859-988f-3150a915ef2a" (UID: "a374cd19-18a6-4859-988f-3150a915ef2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.654615 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.654645 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.654656 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.654691 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-945s6\" (UniqueName: \"kubernetes.io/projected/a374cd19-18a6-4859-988f-3150a915ef2a-kube-api-access-945s6\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.654702 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a374cd19-18a6-4859-988f-3150a915ef2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.658217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gmjl5" event={"ID":"18753f8d-6d51-430b-aa62-f9ee41cf917c","Type":"ContainerDied","Data":"fcff5d0ddcbefca0ce9e1379c01065d38ff4d96407cf269b7ffd530f8012bc38"} Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.658291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" event={"ID":"a374cd19-18a6-4859-988f-3150a915ef2a","Type":"ContainerDied","Data":"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165"} Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.658312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbd96959f-cfv26" event={"ID":"a374cd19-18a6-4859-988f-3150a915ef2a","Type":"ContainerDied","Data":"79ffd20f6568f1439b2922184ca80fb51f5907cddd03b8c84cb6ab8d28f3a9de"} Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.658350 4858 scope.go:117] "RemoveContainer" containerID="9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.687418 4858 scope.go:117] "RemoveContainer" containerID="560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.703051 4858 scope.go:117] "RemoveContainer" containerID="9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165" Nov 22 09:19:57 crc kubenswrapper[4858]: E1122 09:19:57.703576 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165\": container with ID starting with 9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165 not found: ID does not exist" containerID="9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.703709 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165"} err="failed to get container status \"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165\": rpc error: code = NotFound desc = could not find container \"9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165\": container with ID starting with 9a81a0a07e79290054a6f7a675769fbc840761e5d66634ce0276dcd5b7993165 not found: ID does not exist" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.703783 4858 scope.go:117] "RemoveContainer" containerID="560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770" Nov 22 09:19:57 crc kubenswrapper[4858]: E1122 09:19:57.704151 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770\": container with ID starting with 560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770 not found: ID does not exist" containerID="560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.704184 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770"} err="failed to get container status \"560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770\": rpc error: code = NotFound desc = could not find container \"560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770\": container with ID starting with 560b6f31fa1977dedae989592cd960200acef26944d70ac3e63ad7052ffee770 not found: ID does not exist" Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 09:19:57.917791 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:57 crc kubenswrapper[4858]: I1122 
09:19:57.923575 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fbd96959f-cfv26"] Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.893998 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.975939 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.976072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.976121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.976181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.976199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.976214 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data\") pod \"18753f8d-6d51-430b-aa62-f9ee41cf917c\" (UID: \"18753f8d-6d51-430b-aa62-f9ee41cf917c\") " Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.981661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.981689 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts" (OuterVolumeSpecName: "scripts") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.981720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.981745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt" (OuterVolumeSpecName: "kube-api-access-q8dqt") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "kube-api-access-q8dqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.998738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:58 crc kubenswrapper[4858]: I1122 09:19:58.999333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data" (OuterVolumeSpecName: "config-data") pod "18753f8d-6d51-430b-aa62-f9ee41cf917c" (UID: "18753f8d-6d51-430b-aa62-f9ee41cf917c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078813 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078863 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078878 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078890 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078905 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18753f8d-6d51-430b-aa62-f9ee41cf917c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.078917 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8dqt\" (UniqueName: \"kubernetes.io/projected/18753f8d-6d51-430b-aa62-f9ee41cf917c-kube-api-access-q8dqt\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.547483 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a374cd19-18a6-4859-988f-3150a915ef2a" path="/var/lib/kubelet/pods/a374cd19-18a6-4859-988f-3150a915ef2a/volumes" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.604555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gmjl5" event={"ID":"18753f8d-6d51-430b-aa62-f9ee41cf917c","Type":"ContainerDied","Data":"0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c"} Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.604610 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0592f3511c7fa1b610dc6b0107171a931b9927e202a2708f3936978390b3301c" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.604719 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gmjl5" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.745953 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:19:59 crc kubenswrapper[4858]: E1122 09:19:59.746282 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="dnsmasq-dns" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.746299 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="dnsmasq-dns" Nov 22 09:19:59 crc kubenswrapper[4858]: E1122 09:19:59.746336 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18753f8d-6d51-430b-aa62-f9ee41cf917c" containerName="keystone-bootstrap" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.746344 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="18753f8d-6d51-430b-aa62-f9ee41cf917c" containerName="keystone-bootstrap" Nov 22 09:19:59 crc kubenswrapper[4858]: E1122 09:19:59.746365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="init" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.746373 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="init" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.746531 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="18753f8d-6d51-430b-aa62-f9ee41cf917c" containerName="keystone-bootstrap" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.746543 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a374cd19-18a6-4859-988f-3150a915ef2a" containerName="dnsmasq-dns" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.747423 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.750536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nhhkf" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.750940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.751299 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.751562 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.757718 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.757902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.769760 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.894657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.894794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clfwk\" (UniqueName: \"kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.894953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.894980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.895005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.895047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts\") pod \"keystone-7567c6b846-s845h\" (UID: 
\"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.895117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:19:59 crc kubenswrapper[4858]: I1122 09:19:59.895205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.001664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.001750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.001777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.001813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.001909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clfwk\" (UniqueName: \"kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.002119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.002140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " 
pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.002161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.005991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.007496 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.007796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.007834 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.026596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.026610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.027267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.030970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clfwk\" (UniqueName: \"kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk\") pod \"keystone-7567c6b846-s845h\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.068965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.503458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:20:00 crc kubenswrapper[4858]: I1122 09:20:00.612297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7567c6b846-s845h" event={"ID":"1354cd0c-52c3-4174-b012-21a2b5ea8324","Type":"ContainerStarted","Data":"2dcbcd5517ab0f48c19cd12a19a6a3e26f9a3af4196ebebe3ed9a963a4979fbd"} Nov 22 09:20:01 crc kubenswrapper[4858]: I1122 09:20:01.623001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7567c6b846-s845h" event={"ID":"1354cd0c-52c3-4174-b012-21a2b5ea8324","Type":"ContainerStarted","Data":"f098abc2e40e7e1a013de3bcdfe604e5a7ae91217777b7915ebd28ba5482db6d"} Nov 22 09:20:01 crc kubenswrapper[4858]: I1122 09:20:01.624207 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:01 crc kubenswrapper[4858]: I1122 09:20:01.637710 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7567c6b846-s845h" podStartSLOduration=2.637688813 podStartE2EDuration="2.637688813s" podCreationTimestamp="2025-11-22 09:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:20:01.636887147 +0000 UTC m=+7763.478310173" watchObservedRunningTime="2025-11-22 09:20:01.637688813 +0000 UTC m=+7763.479111819" Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.312419 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.312977 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.313076 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.313881 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.313940 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed" gracePeriod=600 Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.727094 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" 
containerID="42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed" exitCode=0 Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.727188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed"} Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.727486 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417"} Nov 22 09:20:15 crc kubenswrapper[4858]: I1122 09:20:15.727507 4858 scope.go:117] "RemoveContainer" containerID="79020d57f297d37502f9e7e9eaabc33ca97474ae4e2bbbe474c4130c2f30f32f" Nov 22 09:20:31 crc kubenswrapper[4858]: I1122 09:20:31.635440 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.036744 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.039203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.044931 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t4dcz" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.045135 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.045383 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.046250 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.150812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.150888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.150919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78f7\" (UniqueName: \"kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.150987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.252844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.253352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.253543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78f7\" (UniqueName: \"kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.253742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.254542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.265979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.266127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.269146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78f7\" (UniqueName: \"kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7\") pod \"openstackclient\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.368120 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.774834 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:20:36 crc kubenswrapper[4858]: W1122 09:20:36.775503 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod219f1525_1b78_413c_a590_76f21b7df852.slice/crio-ac5403c756131b413aabe96e27c2a5986d11e3b0c28049b237af9f0be429b00d WatchSource:0}: Error finding container ac5403c756131b413aabe96e27c2a5986d11e3b0c28049b237af9f0be429b00d: Status 404 returned error can't find the container with id ac5403c756131b413aabe96e27c2a5986d11e3b0c28049b237af9f0be429b00d Nov 22 09:20:36 crc kubenswrapper[4858]: I1122 09:20:36.908754 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"219f1525-1b78-413c-a590-76f21b7df852","Type":"ContainerStarted","Data":"ac5403c756131b413aabe96e27c2a5986d11e3b0c28049b237af9f0be429b00d"} Nov 22 09:20:55 crc kubenswrapper[4858]: I1122 09:20:55.079735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"219f1525-1b78-413c-a590-76f21b7df852","Type":"ContainerStarted","Data":"d549362dfc90b0a50c5ba9a47f8c3e2a35a35e0f363ecfdba956cba81768d510"} Nov 22 09:20:55 crc kubenswrapper[4858]: I1122 09:20:55.100191 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.004784979 podStartE2EDuration="19.100171896s" podCreationTimestamp="2025-11-22 09:20:36 +0000 UTC" firstStartedPulling="2025-11-22 09:20:36.777799292 +0000 UTC m=+7798.619222298" lastFinishedPulling="2025-11-22 09:20:53.873186209 +0000 UTC m=+7815.714609215" observedRunningTime="2025-11-22 09:20:55.093402699 +0000 UTC m=+7816.934825735" watchObservedRunningTime="2025-11-22 09:20:55.100171896 +0000 UTC m=+7816.941594902" Nov 22 09:21:15 crc kubenswrapper[4858]: I1122 09:21:15.873672 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" podUID="23bfa545-d340-4a3f-afeb-8e292096cb33" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:21:15 crc kubenswrapper[4858]: I1122 09:21:15.873737 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-w289j" podUID="23bfa545-d340-4a3f-afeb-8e292096cb33" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.311819 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.312981 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 
09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.849062 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jk4xd"] Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.850414 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.870095 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-04c4-account-create-kn5lf"] Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.874285 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.877977 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.894207 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jk4xd"] Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.902279 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-04c4-account-create-kn5lf"] Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.971312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdg5\" (UniqueName: \"kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:15 crc kubenswrapper[4858]: I1122 09:22:15.971436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.073511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwdg5\" (UniqueName: \"kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.073578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2jd\" (UniqueName: \"kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd\") pod \"barbican-04c4-account-create-kn5lf\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.073650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.073686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts\") pod \"barbican-04c4-account-create-kn5lf\" (UID: 
\"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.074780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.092259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdg5\" (UniqueName: \"kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5\") pod \"barbican-db-create-jk4xd\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.165525 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.174865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2jd\" (UniqueName: \"kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd\") pod \"barbican-04c4-account-create-kn5lf\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.175023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts\") pod \"barbican-04c4-account-create-kn5lf\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.175954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts\") pod \"barbican-04c4-account-create-kn5lf\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.191643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2jd\" (UniqueName: \"kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd\") pod \"barbican-04c4-account-create-kn5lf\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.201496 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.632595 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jk4xd"] Nov 22 09:22:16 crc kubenswrapper[4858]: I1122 09:22:16.707551 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-04c4-account-create-kn5lf"] Nov 22 09:22:16 crc kubenswrapper[4858]: W1122 09:22:16.723736 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1031cca4_1d3b_4294_98fa_88f044db7bcb.slice/crio-86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685 WatchSource:0}: Error finding container 86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685: Status 404 returned error can't find the container with id 86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685 Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.470458 4858 generic.go:334] "Generic (PLEG): container finished" podID="76e6d022-50db-4869-ae72-e3a3b392654c" containerID="40ccf03462a8b5ea103a8d94847af5b050bcec5f047d293e64c5bb2b02000f3d" exitCode=0 Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.470599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk4xd" event={"ID":"76e6d022-50db-4869-ae72-e3a3b392654c","Type":"ContainerDied","Data":"40ccf03462a8b5ea103a8d94847af5b050bcec5f047d293e64c5bb2b02000f3d"} Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.470942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk4xd" event={"ID":"76e6d022-50db-4869-ae72-e3a3b392654c","Type":"ContainerStarted","Data":"536b151177e3654a65edf5f6cb6970f21b6b377754b5a859e22fff48f18e21ee"} Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.472684 4858 generic.go:334] "Generic (PLEG): container finished" podID="1031cca4-1d3b-4294-98fa-88f044db7bcb" containerID="815846bf78109a07a1ed5511078ecc9e2d58555343c5df1832a12e9e1ef085a0" exitCode=0 Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.472728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-04c4-account-create-kn5lf" event={"ID":"1031cca4-1d3b-4294-98fa-88f044db7bcb","Type":"ContainerDied","Data":"815846bf78109a07a1ed5511078ecc9e2d58555343c5df1832a12e9e1ef085a0"} Nov 22 09:22:17 crc kubenswrapper[4858]: I1122 09:22:17.472757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-04c4-account-create-kn5lf" event={"ID":"1031cca4-1d3b-4294-98fa-88f044db7bcb","Type":"ContainerStarted","Data":"86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685"} Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.864290 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.873853 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.922356 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts\") pod \"1031cca4-1d3b-4294-98fa-88f044db7bcb\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.922403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts\") pod \"76e6d022-50db-4869-ae72-e3a3b392654c\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.922462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd2jd\" (UniqueName: \"kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd\") pod \"1031cca4-1d3b-4294-98fa-88f044db7bcb\" (UID: \"1031cca4-1d3b-4294-98fa-88f044db7bcb\") " Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.922511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwdg5\" (UniqueName: \"kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5\") pod \"76e6d022-50db-4869-ae72-e3a3b392654c\" (UID: \"76e6d022-50db-4869-ae72-e3a3b392654c\") " Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.923406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1031cca4-1d3b-4294-98fa-88f044db7bcb" (UID: "1031cca4-1d3b-4294-98fa-88f044db7bcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.923570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76e6d022-50db-4869-ae72-e3a3b392654c" (UID: "76e6d022-50db-4869-ae72-e3a3b392654c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.930116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5" (OuterVolumeSpecName: "kube-api-access-bwdg5") pod "76e6d022-50db-4869-ae72-e3a3b392654c" (UID: "76e6d022-50db-4869-ae72-e3a3b392654c"). InnerVolumeSpecName "kube-api-access-bwdg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:18 crc kubenswrapper[4858]: I1122 09:22:18.930696 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd" (OuterVolumeSpecName: "kube-api-access-wd2jd") pod "1031cca4-1d3b-4294-98fa-88f044db7bcb" (UID: "1031cca4-1d3b-4294-98fa-88f044db7bcb"). InnerVolumeSpecName "kube-api-access-wd2jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.023595 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd2jd\" (UniqueName: \"kubernetes.io/projected/1031cca4-1d3b-4294-98fa-88f044db7bcb-kube-api-access-wd2jd\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.024030 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwdg5\" (UniqueName: \"kubernetes.io/projected/76e6d022-50db-4869-ae72-e3a3b392654c-kube-api-access-bwdg5\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.024042 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1031cca4-1d3b-4294-98fa-88f044db7bcb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.024050 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e6d022-50db-4869-ae72-e3a3b392654c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.496667 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk4xd" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.497062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk4xd" event={"ID":"76e6d022-50db-4869-ae72-e3a3b392654c","Type":"ContainerDied","Data":"536b151177e3654a65edf5f6cb6970f21b6b377754b5a859e22fff48f18e21ee"} Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.497190 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536b151177e3654a65edf5f6cb6970f21b6b377754b5a859e22fff48f18e21ee" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.499209 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-04c4-account-create-kn5lf" event={"ID":"1031cca4-1d3b-4294-98fa-88f044db7bcb","Type":"ContainerDied","Data":"86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685"} Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.499250 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-04c4-account-create-kn5lf" Nov 22 09:22:19 crc kubenswrapper[4858]: I1122 09:22:19.499268 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a46e1ee08adf0ff1bb1e17526c80f33037fa503daefc5d798bcec435766685" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.150137 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jwksx"] Nov 22 09:22:21 crc kubenswrapper[4858]: E1122 09:22:21.150502 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e6d022-50db-4869-ae72-e3a3b392654c" containerName="mariadb-database-create" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.150515 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e6d022-50db-4869-ae72-e3a3b392654c" containerName="mariadb-database-create" Nov 22 09:22:21 crc kubenswrapper[4858]: E1122 09:22:21.150531 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1031cca4-1d3b-4294-98fa-88f044db7bcb" containerName="mariadb-account-create" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.150537 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1031cca4-1d3b-4294-98fa-88f044db7bcb" containerName="mariadb-account-create" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.150690 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e6d022-50db-4869-ae72-e3a3b392654c" containerName="mariadb-database-create" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.150704 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1031cca4-1d3b-4294-98fa-88f044db7bcb" containerName="mariadb-account-create" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.151257 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.154634 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h4p5f" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.154911 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.160083 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jwksx"] Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.262809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.263094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45x2q\" (UniqueName: \"kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.263370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.365184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.365286 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45x2q\" (UniqueName: \"kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.365427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.371476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.371635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.386519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45x2q\" (UniqueName: \"kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q\") pod \"barbican-db-sync-jwksx\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.469494 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:21 crc kubenswrapper[4858]: I1122 09:22:21.941271 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jwksx"] Nov 22 09:22:22 crc kubenswrapper[4858]: I1122 09:22:22.528960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jwksx" event={"ID":"c54fead0-d92a-4f12-aed2-1266e9cc962b","Type":"ContainerStarted","Data":"5673bff8b7e7991b1e32664514085c44435f49e9df66fd031a268c313c1876df"} Nov 22 09:22:26 crc kubenswrapper[4858]: I1122 09:22:26.562393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jwksx" event={"ID":"c54fead0-d92a-4f12-aed2-1266e9cc962b","Type":"ContainerStarted","Data":"f9467c2155d4788c8502584e12f867bf1fc8d6a67ca589b9132124df9f592c10"} Nov 22 09:22:26 crc kubenswrapper[4858]: I1122 09:22:26.580203 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jwksx" podStartSLOduration=1.773035852 podStartE2EDuration="5.5801884s" podCreationTimestamp="2025-11-22 09:22:21 +0000 UTC" firstStartedPulling="2025-11-22 09:22:21.953828864 +0000 UTC m=+7903.795251890" lastFinishedPulling="2025-11-22 09:22:25.760981432 +0000 UTC m=+7907.602404438" observedRunningTime="2025-11-22 09:22:26.577755352 +0000 UTC m=+7908.419178368" watchObservedRunningTime="2025-11-22 09:22:26.5801884 +0000 UTC m=+7908.421611406" Nov 22 09:22:28 crc kubenswrapper[4858]: I1122 09:22:28.578461 4858 generic.go:334] "Generic (PLEG): container finished" podID="c54fead0-d92a-4f12-aed2-1266e9cc962b" containerID="f9467c2155d4788c8502584e12f867bf1fc8d6a67ca589b9132124df9f592c10" exitCode=0 Nov 22 09:22:28 crc kubenswrapper[4858]: I1122 09:22:28.578556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jwksx" event={"ID":"c54fead0-d92a-4f12-aed2-1266e9cc962b","Type":"ContainerDied","Data":"f9467c2155d4788c8502584e12f867bf1fc8d6a67ca589b9132124df9f592c10"} Nov 22 09:22:28 crc kubenswrapper[4858]: E1122 09:22:28.603791 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc54fead0_d92a_4f12_aed2_1266e9cc962b.slice/crio-f9467c2155d4788c8502584e12f867bf1fc8d6a67ca589b9132124df9f592c10.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:22:29 crc kubenswrapper[4858]: I1122 09:22:29.922626 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.113649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle\") pod \"c54fead0-d92a-4f12-aed2-1266e9cc962b\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.113990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45x2q\" (UniqueName: \"kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q\") pod \"c54fead0-d92a-4f12-aed2-1266e9cc962b\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.114119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data\") pod \"c54fead0-d92a-4f12-aed2-1266e9cc962b\" (UID: \"c54fead0-d92a-4f12-aed2-1266e9cc962b\") " Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.120571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c54fead0-d92a-4f12-aed2-1266e9cc962b" (UID: "c54fead0-d92a-4f12-aed2-1266e9cc962b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.121445 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q" (OuterVolumeSpecName: "kube-api-access-45x2q") pod "c54fead0-d92a-4f12-aed2-1266e9cc962b" (UID: "c54fead0-d92a-4f12-aed2-1266e9cc962b"). InnerVolumeSpecName "kube-api-access-45x2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.146800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c54fead0-d92a-4f12-aed2-1266e9cc962b" (UID: "c54fead0-d92a-4f12-aed2-1266e9cc962b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.216098 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.216141 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54fead0-d92a-4f12-aed2-1266e9cc962b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.216154 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45x2q\" (UniqueName: \"kubernetes.io/projected/c54fead0-d92a-4f12-aed2-1266e9cc962b-kube-api-access-45x2q\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.596210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jwksx" event={"ID":"c54fead0-d92a-4f12-aed2-1266e9cc962b","Type":"ContainerDied","Data":"5673bff8b7e7991b1e32664514085c44435f49e9df66fd031a268c313c1876df"} Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.596251 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5673bff8b7e7991b1e32664514085c44435f49e9df66fd031a268c313c1876df" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.596281 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jwksx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.823600 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:22:30 crc kubenswrapper[4858]: E1122 09:22:30.823946 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54fead0-d92a-4f12-aed2-1266e9cc962b" containerName="barbican-db-sync" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.823962 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54fead0-d92a-4f12-aed2-1266e9cc962b" containerName="barbican-db-sync" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.824142 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54fead0-d92a-4f12-aed2-1266e9cc962b" containerName="barbican-db-sync" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.824980 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.830172 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.830607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h4p5f" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.830628 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.843247 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.858074 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.860631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.862043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.888458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.927442 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.929401 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.931450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.933528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.933654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dl56\" (UniqueName: \"kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.933830 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.934046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:30 crc kubenswrapper[4858]: I1122 09:22:30.949610 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.006710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.008072 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.012277 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.029795 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcx8\" (UniqueName: \"kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptzgw\" (UniqueName: \"kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dl56\" (UniqueName: \"kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.035663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.036853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.042191 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.061245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.075458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dl56\" (UniqueName: \"kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.081018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle\") pod \"barbican-worker-575cc76dd7-swvhx\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8cl\" (UniqueName: \"kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcx8\" (UniqueName: \"kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptzgw\" (UniqueName: \"kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " 
pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136854 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136877 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.136990 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.137012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: 
\"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.137062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.139102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.139664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.140373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.140702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.141689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.153388 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.172345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.172839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.172889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.240238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.240329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.240401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.240451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt8cl\" (UniqueName: \"kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.240539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.246135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs\") pod \"barbican-api-647876995d-6fcn4\" (UID: 
\"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.258511 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.260103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.264701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.268009 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptzgw\" (UniqueName: \"kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw\") pod \"barbican-keystone-listener-84d7f7895d-dzj8l\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.278356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcx8\" (UniqueName: \"kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8\") pod \"dnsmasq-dns-5c6d587c67-t2j5z\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.286853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt8cl\" (UniqueName: \"kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl\") pod \"barbican-api-647876995d-6fcn4\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.334823 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.496536 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.567528 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.731370 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.793871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.860929 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:22:31 crc kubenswrapper[4858]: I1122 09:22:31.951311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:22:31 crc kubenswrapper[4858]: W1122 09:22:31.957336 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod879cb25d_5d39_48df_ac21_505127e58fd1.slice/crio-9d11dd0d0c260792034168f0178a2f5d52c9eb6de47e59d32c453ae1f0484a85 WatchSource:0}: Error finding container 9d11dd0d0c260792034168f0178a2f5d52c9eb6de47e59d32c453ae1f0484a85: Status 404 returned error can't find the container with id 9d11dd0d0c260792034168f0178a2f5d52c9eb6de47e59d32c453ae1f0484a85 Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.638479 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerStarted","Data":"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.638827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerStarted","Data":"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.638840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerStarted","Data":"aa70273ecb25928befade982600c0ebec843e8e5d03d6accc37e17e155e4b26c"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.638856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.638868 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.647686 4858 generic.go:334] "Generic (PLEG): container finished" podID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerID="1a09c6820aefe7f92ba89aecbe89b535908162bbe95d10862a2e0fe019fa63ba" exitCode=0 Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.647835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" event={"ID":"f8cfe71f-8556-40e0-b48b-dae7af5efc88","Type":"ContainerDied","Data":"1a09c6820aefe7f92ba89aecbe89b535908162bbe95d10862a2e0fe019fa63ba"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.647889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" event={"ID":"f8cfe71f-8556-40e0-b48b-dae7af5efc88","Type":"ContainerStarted","Data":"a6becda7b4d3a7fb0d6fc059e366a9ed5942b78973868f2ca3300a856db4387a"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.649059 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerStarted","Data":"9722c974109e8edf322a2589a05375dfe912699934b0c20542e092fb4a849ef2"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.650312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerStarted","Data":"9d11dd0d0c260792034168f0178a2f5d52c9eb6de47e59d32c453ae1f0484a85"} Nov 22 09:22:32 crc kubenswrapper[4858]: I1122 09:22:32.667673 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-647876995d-6fcn4" podStartSLOduration=2.667621873 podStartE2EDuration="2.667621873s" podCreationTimestamp="2025-11-22 09:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:22:32.656090574 +0000 UTC m=+7914.497513590" watchObservedRunningTime="2025-11-22 09:22:32.667621873 +0000 UTC m=+7914.509044899" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.319582 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.322142 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.323971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.331695 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.345524 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390782 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs\") pod \"barbican-api-865798754b-wklbv\" 
(UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b76tj\" (UniqueName: \"kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.390995 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b76tj\" (UniqueName: \"kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs\") pod 
\"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.492207 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.493357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.497476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.497544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.498376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.498500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.498966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.515396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b76tj\" (UniqueName: \"kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj\") pod \"barbican-api-865798754b-wklbv\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:33 crc kubenswrapper[4858]: I1122 09:22:33.642641 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.118438 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:22:34 crc kubenswrapper[4858]: W1122 09:22:34.123493 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48b023fd_a47e_4fac_b75f_50e32cd8ed68.slice/crio-af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead WatchSource:0}: Error finding container af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead: Status 404 returned error can't find the container with id af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.685723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerStarted","Data":"9f5e64397fcfbf30b8e57de5cd79bbaa5aa1cfb6dc41d738673c9552face9f4f"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.685778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerStarted","Data":"761bd583458b9228a46e2048c9579370d0d1ec7104acbbde74a8d9d0c1f15d55"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.685791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerStarted","Data":"af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.686119 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.686158 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.688102 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerStarted","Data":"04b8eabacd40872b6a27353dabf534bacf39a98dba7ea7e75a7efb827a971e4a"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.688139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerStarted","Data":"55ca57bd132c43b406a7e2f78d44ccc4ccfef51b3c54f7deb21f3fcdf315f42d"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.691042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" event={"ID":"f8cfe71f-8556-40e0-b48b-dae7af5efc88","Type":"ContainerStarted","Data":"a5c5e9edcb5e3de31e1285faeff69b1a4b12b015a3ebea1d3911cea5bc782e7e"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.691461 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.693532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerStarted","Data":"cddb36142f710de01a2a2604912a1de51c98b16778d69d3541cb2e91fd0be10f"} 
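The pod_startup_latency_tracker entries in this stream (for example the barbican-db-sync-jwksx one above and the barbican worker/keystone-listener entries just below) report several derived durations: podStartE2EDuration spans pod creation to the watch-observed running time, while podStartSLOduration additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. The following Go sketch only reproduces that arithmetic from the logged timestamps; it illustrates the relationship the logged numbers satisfy, it is not the kubelet's actual implementation, and the mustParse helper is an ad hoc convenience for the example.

package main

import (
	"fmt"
	"time"
)

// mustParse is a small helper for this sketch; the layout below is the default
// Go time.Time string format that appears in the log entries above.
func mustParse(layout, value string) time.Time {
	t, err := time.Parse(layout, value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the barbican-db-sync-jwksx startup entry above.
	created := mustParse(layout, "2025-11-22 09:22:21 +0000 UTC")
	firstPull := mustParse(layout, "2025-11-22 09:22:21.953828864 +0000 UTC")
	lastPull := mustParse(layout, "2025-11-22 09:22:25.760981432 +0000 UTC")
	watchRunning := mustParse(layout, "2025-11-22 09:22:26.5801884 +0000 UTC")

	// End-to-end startup: pod creation to watch-observed running (about 5.5801884s in the log).
	e2e := watchRunning.Sub(created)
	// SLI-style startup: the same span minus the image-pull window (about 1.773s in the log).
	slo := e2e - lastPull.Sub(firstPull)

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}

Run as-is, this prints durations that line up with the podStartE2EDuration and podStartSLOduration fields logged for that pod, to within the precision of the truncated watchObservedRunningTime timestamp.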
Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.693562 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerStarted","Data":"452a9ab7b1b4a1974cdad0d365d5a8a6fa77348bb175f5268abb56ed7e86bf62"} Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.711510 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-865798754b-wklbv" podStartSLOduration=1.711490542 podStartE2EDuration="1.711490542s" podCreationTimestamp="2025-11-22 09:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:22:34.708517127 +0000 UTC m=+7916.549940153" watchObservedRunningTime="2025-11-22 09:22:34.711490542 +0000 UTC m=+7916.552913558" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.760752 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-575cc76dd7-swvhx" podStartSLOduration=3.187176021 podStartE2EDuration="4.760729549s" podCreationTimestamp="2025-11-22 09:22:30 +0000 UTC" firstStartedPulling="2025-11-22 09:22:31.740569705 +0000 UTC m=+7913.581992711" lastFinishedPulling="2025-11-22 09:22:33.314123233 +0000 UTC m=+7915.155546239" observedRunningTime="2025-11-22 09:22:34.737158304 +0000 UTC m=+7916.578581320" watchObservedRunningTime="2025-11-22 09:22:34.760729549 +0000 UTC m=+7916.602152555" Nov 22 09:22:34 crc kubenswrapper[4858]: I1122 09:22:34.764071 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" podStartSLOduration=3.39218782 podStartE2EDuration="4.764054664s" podCreationTimestamp="2025-11-22 09:22:30 +0000 UTC" firstStartedPulling="2025-11-22 09:22:31.962469166 +0000 UTC m=+7913.803892172" lastFinishedPulling="2025-11-22 09:22:33.33433601 +0000 UTC m=+7915.175759016" observedRunningTime="2025-11-22 09:22:34.757585548 +0000 UTC m=+7916.599008594" watchObservedRunningTime="2025-11-22 09:22:34.764054664 +0000 UTC m=+7916.605477670" Nov 22 09:22:37 crc kubenswrapper[4858]: I1122 09:22:37.983061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:38 crc kubenswrapper[4858]: I1122 09:22:38.021915 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" podStartSLOduration=8.021887824 podStartE2EDuration="8.021887824s" podCreationTimestamp="2025-11-22 09:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:22:34.79078838 +0000 UTC m=+7916.632211406" watchObservedRunningTime="2025-11-22 09:22:38.021887824 +0000 UTC m=+7919.863310860" Nov 22 09:22:39 crc kubenswrapper[4858]: I1122 09:22:39.405582 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:41 crc kubenswrapper[4858]: I1122 09:22:41.570568 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:22:41 crc kubenswrapper[4858]: I1122 09:22:41.663438 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:22:41 crc kubenswrapper[4858]: I1122 09:22:41.663699 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="dnsmasq-dns" containerID="cri-o://dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee" gracePeriod=10 Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.195363 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.266189 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzfdz\" (UniqueName: \"kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz\") pod \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.266398 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb\") pod \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.266438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config\") pod \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.266521 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc\") pod \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.266593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb\") pod \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\" (UID: \"253654e9-90fa-4cfd-ac60-8b67c1c1b419\") " Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.276453 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz" (OuterVolumeSpecName: "kube-api-access-gzfdz") pod "253654e9-90fa-4cfd-ac60-8b67c1c1b419" (UID: "253654e9-90fa-4cfd-ac60-8b67c1c1b419"). InnerVolumeSpecName "kube-api-access-gzfdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.319486 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "253654e9-90fa-4cfd-ac60-8b67c1c1b419" (UID: "253654e9-90fa-4cfd-ac60-8b67c1c1b419"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.325762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config" (OuterVolumeSpecName: "config") pod "253654e9-90fa-4cfd-ac60-8b67c1c1b419" (UID: "253654e9-90fa-4cfd-ac60-8b67c1c1b419"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.326803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "253654e9-90fa-4cfd-ac60-8b67c1c1b419" (UID: "253654e9-90fa-4cfd-ac60-8b67c1c1b419"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.346674 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "253654e9-90fa-4cfd-ac60-8b67c1c1b419" (UID: "253654e9-90fa-4cfd-ac60-8b67c1c1b419"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.367516 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzfdz\" (UniqueName: \"kubernetes.io/projected/253654e9-90fa-4cfd-ac60-8b67c1c1b419-kube-api-access-gzfdz\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.367545 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.367554 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.367563 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.367572 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/253654e9-90fa-4cfd-ac60-8b67c1c1b419-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.787622 4858 generic.go:334] "Generic (PLEG): container finished" podID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerID="dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee" exitCode=0 Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.787672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" event={"ID":"253654e9-90fa-4cfd-ac60-8b67c1c1b419","Type":"ContainerDied","Data":"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee"} Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.787703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" event={"ID":"253654e9-90fa-4cfd-ac60-8b67c1c1b419","Type":"ContainerDied","Data":"ac58979407b4f3623574241ce36762aef236be06b53c1fecd52a381035cbc644"} Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.787732 4858 scope.go:117] "RemoveContainer" containerID="dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.787876 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.841115 4858 scope.go:117] "RemoveContainer" containerID="ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.846619 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.853261 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc7844867-d74sf"] Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.873283 4858 scope.go:117] "RemoveContainer" containerID="dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee" Nov 22 09:22:42 crc kubenswrapper[4858]: E1122 09:22:42.874600 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee\": container with ID starting with dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee not found: ID does not exist" containerID="dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.874662 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee"} err="failed to get container status \"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee\": rpc error: code = NotFound desc = could not find container \"dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee\": container with ID starting with dc2fd6550c0a0573fd25aca77684ef021799d0ce249470dea65f19e3987adaee not found: ID does not exist" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.874706 4858 scope.go:117] "RemoveContainer" containerID="ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a" Nov 22 09:22:42 crc kubenswrapper[4858]: E1122 09:22:42.875509 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a\": container with ID starting with ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a not found: ID does not exist" containerID="ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a" Nov 22 09:22:42 crc kubenswrapper[4858]: I1122 09:22:42.875533 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a"} err="failed to get container status \"ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a\": rpc error: code = NotFound desc = could not find container \"ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a\": container with ID starting with ae741caf9bd9e2bdbfd00ff3e5fdf006eb8568234f6dfed362d98c97e553a27a not found: ID does not exist" Nov 22 09:22:43 crc kubenswrapper[4858]: I1122 09:22:43.547616 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" path="/var/lib/kubelet/pods/253654e9-90fa-4cfd-ac60-8b67c1c1b419/volumes" Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.012173 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.024416 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.082641 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.085138 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-647876995d-6fcn4" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api" containerID="cri-o://9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668" gracePeriod=30 Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.086229 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-647876995d-6fcn4" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api-log" containerID="cri-o://3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890" gracePeriod=30 Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.311868 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.312195 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.818336 4858 generic.go:334] "Generic (PLEG): container finished" podID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerID="3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890" exitCode=143 Nov 22 09:22:45 crc kubenswrapper[4858]: I1122 09:22:45.818748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerDied","Data":"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890"} Nov 22 09:22:46 crc kubenswrapper[4858]: I1122 09:22:46.946417 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fc7844867-d74sf" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.40:5353: i/o timeout" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.253125 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-647876995d-6fcn4" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.51:9311/healthcheck\": read tcp 10.217.0.2:54042->10.217.1.51:9311: read: connection reset by peer" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.253131 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-647876995d-6fcn4" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.51:9311/healthcheck\": read tcp 10.217.0.2:54056->10.217.1.51:9311: read: connection reset by peer" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.595059 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.789029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt8cl\" (UniqueName: \"kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl\") pod \"f7dab2e0-a086-4645-bcfa-827ea6896d11\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.789105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom\") pod \"f7dab2e0-a086-4645-bcfa-827ea6896d11\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.789226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle\") pod \"f7dab2e0-a086-4645-bcfa-827ea6896d11\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.789248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs\") pod \"f7dab2e0-a086-4645-bcfa-827ea6896d11\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.789265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data\") pod \"f7dab2e0-a086-4645-bcfa-827ea6896d11\" (UID: \"f7dab2e0-a086-4645-bcfa-827ea6896d11\") " Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.791460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs" (OuterVolumeSpecName: "logs") pod "f7dab2e0-a086-4645-bcfa-827ea6896d11" (UID: "f7dab2e0-a086-4645-bcfa-827ea6896d11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.797764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl" (OuterVolumeSpecName: "kube-api-access-tt8cl") pod "f7dab2e0-a086-4645-bcfa-827ea6896d11" (UID: "f7dab2e0-a086-4645-bcfa-827ea6896d11"). InnerVolumeSpecName "kube-api-access-tt8cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.798797 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f7dab2e0-a086-4645-bcfa-827ea6896d11" (UID: "f7dab2e0-a086-4645-bcfa-827ea6896d11"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.837492 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7dab2e0-a086-4645-bcfa-827ea6896d11" (UID: "f7dab2e0-a086-4645-bcfa-827ea6896d11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.859032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data" (OuterVolumeSpecName: "config-data") pod "f7dab2e0-a086-4645-bcfa-827ea6896d11" (UID: "f7dab2e0-a086-4645-bcfa-827ea6896d11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.863719 4858 generic.go:334] "Generic (PLEG): container finished" podID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerID="9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668" exitCode=0 Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.863782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerDied","Data":"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668"} Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.863813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-647876995d-6fcn4" event={"ID":"f7dab2e0-a086-4645-bcfa-827ea6896d11","Type":"ContainerDied","Data":"aa70273ecb25928befade982600c0ebec843e8e5d03d6accc37e17e155e4b26c"} Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.863856 4858 scope.go:117] "RemoveContainer" containerID="9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.864035 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-647876995d-6fcn4" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.894244 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.894269 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dab2e0-a086-4645-bcfa-827ea6896d11-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.895236 4858 scope.go:117] "RemoveContainer" containerID="3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.894279 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.896590 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt8cl\" (UniqueName: \"kubernetes.io/projected/f7dab2e0-a086-4645-bcfa-827ea6896d11-kube-api-access-tt8cl\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.896652 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7dab2e0-a086-4645-bcfa-827ea6896d11-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.903718 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.909443 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-api-647876995d-6fcn4"] Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.911145 4858 scope.go:117] "RemoveContainer" containerID="9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668" Nov 22 09:22:48 crc kubenswrapper[4858]: E1122 09:22:48.911497 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668\": container with ID starting with 9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668 not found: ID does not exist" containerID="9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.911531 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668"} err="failed to get container status \"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668\": rpc error: code = NotFound desc = could not find container \"9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668\": container with ID starting with 9f76ec09d9521b22a71cc6558e464c583aa87504d16a7db947d0e3c69fc7b668 not found: ID does not exist" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.911552 4858 scope.go:117] "RemoveContainer" containerID="3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890" Nov 22 09:22:48 crc kubenswrapper[4858]: E1122 09:22:48.911860 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890\": container with ID starting with 3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890 not found: ID does not exist" containerID="3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890" Nov 22 09:22:48 crc kubenswrapper[4858]: I1122 09:22:48.911892 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890"} err="failed to get container status \"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890\": rpc error: code = NotFound desc = could not find container \"3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890\": container with ID starting with 3f58795d15a839a178e506bb9107a62ae5d1f7077819ab3af0a86c37ca19b890 not found: ID does not exist" Nov 22 09:22:49 crc kubenswrapper[4858]: E1122 09:22:49.066543 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7dab2e0_a086_4645_bcfa_827ea6896d11.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7dab2e0_a086_4645_bcfa_827ea6896d11.slice/crio-aa70273ecb25928befade982600c0ebec843e8e5d03d6accc37e17e155e4b26c\": RecentStats: unable to find data in memory cache]" Nov 22 09:22:49 crc kubenswrapper[4858]: I1122 09:22:49.552213 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" path="/var/lib/kubelet/pods/f7dab2e0-a086-4645-bcfa-827ea6896d11/volumes" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.621041 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6srlp"] Nov 22 09:22:56 crc kubenswrapper[4858]: E1122 
09:22:56.622252 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622267 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api" Nov 22 09:22:56 crc kubenswrapper[4858]: E1122 09:22:56.622292 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="dnsmasq-dns" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622300 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="dnsmasq-dns" Nov 22 09:22:56 crc kubenswrapper[4858]: E1122 09:22:56.622336 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api-log" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622344 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api-log" Nov 22 09:22:56 crc kubenswrapper[4858]: E1122 09:22:56.622386 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="init" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622393 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="init" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622767 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api-log" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622789 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="253654e9-90fa-4cfd-ac60-8b67c1c1b419" containerName="dnsmasq-dns" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.622800 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7dab2e0-a086-4645-bcfa-827ea6896d11" containerName="barbican-api" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.623851 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.652987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6srlp"] Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.727640 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b93-account-create-t58xd"] Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.728939 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.731232 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.743714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b93-account-create-t58xd"] Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.745799 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxvz\" (UniqueName: \"kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz\") pod \"neutron-db-create-6srlp\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.745856 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts\") pod \"neutron-db-create-6srlp\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.847743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857t8\" (UniqueName: \"kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.847825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxvz\" (UniqueName: \"kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz\") pod \"neutron-db-create-6srlp\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.847876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts\") pod \"neutron-db-create-6srlp\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.847934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.849017 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts\") pod \"neutron-db-create-6srlp\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.866682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxvz\" (UniqueName: \"kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz\") pod \"neutron-db-create-6srlp\" (UID: 
\"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.949348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.949664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857t8\" (UniqueName: \"kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.950180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.965774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:56 crc kubenswrapper[4858]: I1122 09:22:56.968019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857t8\" (UniqueName: \"kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8\") pod \"neutron-6b93-account-create-t58xd\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.047956 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:57 crc kubenswrapper[4858]: W1122 09:22:57.397123 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37256fab_1ed7_4d0e_92f1_eead13a7c3b6.slice/crio-b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0 WatchSource:0}: Error finding container b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0: Status 404 returned error can't find the container with id b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0 Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.399445 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6srlp"] Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.528604 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b93-account-create-t58xd"] Nov 22 09:22:57 crc kubenswrapper[4858]: W1122 09:22:57.532064 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ec3f5a_a8ce_4b39_94ea_2f0cb16c9715.slice/crio-43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53 WatchSource:0}: Error finding container 43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53: Status 404 returned error can't find the container with id 43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53 Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.946724 4858 generic.go:334] "Generic (PLEG): container finished" podID="37256fab-1ed7-4d0e-92f1-eead13a7c3b6" containerID="d6d7d09344eacdcf93e1263e234a50db949ca2ba8c11ba1deaaadd53c0577551" exitCode=0 Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.946861 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6srlp" event={"ID":"37256fab-1ed7-4d0e-92f1-eead13a7c3b6","Type":"ContainerDied","Data":"d6d7d09344eacdcf93e1263e234a50db949ca2ba8c11ba1deaaadd53c0577551"} Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.947248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6srlp" event={"ID":"37256fab-1ed7-4d0e-92f1-eead13a7c3b6","Type":"ContainerStarted","Data":"b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0"} Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.950492 4858 generic.go:334] "Generic (PLEG): container finished" podID="29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" containerID="254bd3260a9ba1aee7f6ba007d9fdde379fd4e6f756722769fabf067b3e29d32" exitCode=0 Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.950643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b93-account-create-t58xd" event={"ID":"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715","Type":"ContainerDied","Data":"254bd3260a9ba1aee7f6ba007d9fdde379fd4e6f756722769fabf067b3e29d32"} Nov 22 09:22:57 crc kubenswrapper[4858]: I1122 09:22:57.950688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b93-account-create-t58xd" event={"ID":"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715","Type":"ContainerStarted","Data":"43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53"} Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.338071 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.344453 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6srlp" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.392426 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857t8\" (UniqueName: \"kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8\") pod \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.392522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts\") pod \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\" (UID: \"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715\") " Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.392929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" (UID: "29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.393029 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.398521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8" (OuterVolumeSpecName: "kube-api-access-857t8") pod "29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" (UID: "29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715"). InnerVolumeSpecName "kube-api-access-857t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.494374 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxvz\" (UniqueName: \"kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz\") pod \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.494561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts\") pod \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\" (UID: \"37256fab-1ed7-4d0e-92f1-eead13a7c3b6\") " Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.495024 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857t8\" (UniqueName: \"kubernetes.io/projected/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715-kube-api-access-857t8\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.495737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37256fab-1ed7-4d0e-92f1-eead13a7c3b6" (UID: "37256fab-1ed7-4d0e-92f1-eead13a7c3b6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.498715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz" (OuterVolumeSpecName: "kube-api-access-fsxvz") pod "37256fab-1ed7-4d0e-92f1-eead13a7c3b6" (UID: "37256fab-1ed7-4d0e-92f1-eead13a7c3b6"). InnerVolumeSpecName "kube-api-access-fsxvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.597679 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxvz\" (UniqueName: \"kubernetes.io/projected/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-kube-api-access-fsxvz\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.597814 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37256fab-1ed7-4d0e-92f1-eead13a7c3b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.974594 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b93-account-create-t58xd" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.974646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b93-account-create-t58xd" event={"ID":"29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715","Type":"ContainerDied","Data":"43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53"} Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.975692 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c442e7f6a3e72eb6a890e4e345a30a78adabf41a4d2052df7de6c2c200ab53" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.980487 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6srlp" event={"ID":"37256fab-1ed7-4d0e-92f1-eead13a7c3b6","Type":"ContainerDied","Data":"b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0"} Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.980567 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fc095da1a28ecb3badb5c2874196e551c384955c1b1422c526ab81c10d0eb0" Nov 22 09:22:59 crc kubenswrapper[4858]: I1122 09:22:59.980646 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6srlp" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.952510 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gvpps"] Nov 22 09:23:01 crc kubenswrapper[4858]: E1122 09:23:01.953001 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37256fab-1ed7-4d0e-92f1-eead13a7c3b6" containerName="mariadb-database-create" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.953018 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="37256fab-1ed7-4d0e-92f1-eead13a7c3b6" containerName="mariadb-database-create" Nov 22 09:23:01 crc kubenswrapper[4858]: E1122 09:23:01.953041 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" containerName="mariadb-account-create" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.953048 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" containerName="mariadb-account-create" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.953257 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" containerName="mariadb-account-create" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.953278 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="37256fab-1ed7-4d0e-92f1-eead13a7c3b6" containerName="mariadb-database-create" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.954093 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.957034 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mtnpb" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.957160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.959928 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 09:23:01 crc kubenswrapper[4858]: I1122 09:23:01.963642 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gvpps"] Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.045255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.045421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44rt\" (UniqueName: \"kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.045519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.147280 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.147389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44rt\" (UniqueName: \"kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.147441 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.152837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.153517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.165790 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44rt\" (UniqueName: \"kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt\") pod \"neutron-db-sync-gvpps\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.286423 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:02 crc kubenswrapper[4858]: I1122 09:23:02.714782 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gvpps"] Nov 22 09:23:02 crc kubenswrapper[4858]: W1122 09:23:02.720982 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e9a1d9_1aa6_4b59_82b5_bba9b099c94d.slice/crio-8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916 WatchSource:0}: Error finding container 8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916: Status 404 returned error can't find the container with id 8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916 Nov 22 09:23:03 crc kubenswrapper[4858]: I1122 09:23:03.007621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gvpps" event={"ID":"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d","Type":"ContainerStarted","Data":"30f4bdd049593318e80955a4b1cbd854fd71290ca3ed692dae22858c31f0db9f"} Nov 22 09:23:03 crc kubenswrapper[4858]: I1122 09:23:03.007680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gvpps" event={"ID":"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d","Type":"ContainerStarted","Data":"8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916"} Nov 22 09:23:03 crc kubenswrapper[4858]: I1122 09:23:03.027566 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gvpps" podStartSLOduration=2.02754177 podStartE2EDuration="2.02754177s" podCreationTimestamp="2025-11-22 09:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:03.024487172 +0000 UTC m=+7944.865910188" watchObservedRunningTime="2025-11-22 09:23:03.02754177 +0000 UTC m=+7944.868964776" Nov 22 09:23:07 crc kubenswrapper[4858]: I1122 09:23:07.048613 4858 generic.go:334] "Generic (PLEG): container finished" podID="e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" containerID="30f4bdd049593318e80955a4b1cbd854fd71290ca3ed692dae22858c31f0db9f" exitCode=0 Nov 22 09:23:07 crc kubenswrapper[4858]: I1122 09:23:07.048728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gvpps" event={"ID":"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d","Type":"ContainerDied","Data":"30f4bdd049593318e80955a4b1cbd854fd71290ca3ed692dae22858c31f0db9f"} Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.386030 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.462445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config\") pod \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.462767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle\") pod \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.462858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x44rt\" (UniqueName: \"kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt\") pod \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\" (UID: \"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d\") " Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.468252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt" (OuterVolumeSpecName: "kube-api-access-x44rt") pod "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" (UID: "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d"). InnerVolumeSpecName "kube-api-access-x44rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.487220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" (UID: "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.494204 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config" (OuterVolumeSpecName: "config") pod "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" (UID: "e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.565164 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.565691 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x44rt\" (UniqueName: \"kubernetes.io/projected/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-kube-api-access-x44rt\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:08 crc kubenswrapper[4858]: I1122 09:23:08.565705 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.066467 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gvpps" event={"ID":"e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d","Type":"ContainerDied","Data":"8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916"} Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.066516 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8300fba9da20204dd9197f35ba4fabcdb4e376a9db8e9f2f965aa990cff6a916" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.066558 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gvpps" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.294224 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"] Nov 22 09:23:09 crc kubenswrapper[4858]: E1122 09:23:09.295197 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" containerName="neutron-db-sync" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.295260 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" containerName="neutron-db-sync" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.297585 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" containerName="neutron-db-sync" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.298688 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.355859 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"] Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.379768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6fm\" (UniqueName: \"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.380180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.380281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.380379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.380464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.458021 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.462686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.466228 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mtnpb" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.466413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.466514 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.466522 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.469807 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.483873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.483940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.483989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.484044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.484065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6fm\" (UniqueName: \"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.485687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.486489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " 
pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.487015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.488158 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.519503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6fm\" (UniqueName: \"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm\") pod \"dnsmasq-dns-85f47b9c95-b9nkm\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") " pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.585262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.585658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.585701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.585728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwsg\" (UniqueName: \"kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.585773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.665616 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.688406 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.689218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.689256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.689274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwsg\" (UniqueName: \"kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.689328 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.694450 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.694908 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.694974 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.695550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.710962 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xtwsg\" (UniqueName: \"kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg\") pod \"neutron-658f9d84-hnspq\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:09 crc kubenswrapper[4858]: I1122 09:23:09.784089 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:10 crc kubenswrapper[4858]: I1122 09:23:10.105501 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"] Nov 22 09:23:10 crc kubenswrapper[4858]: I1122 09:23:10.379138 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.097828 4858 generic.go:334] "Generic (PLEG): container finished" podID="48f25b67-d57e-476a-ae32-bed8363c1865" containerID="022ec0c7a87e6be2451dc8fdc102e78da1685ffde242cd65e49c9cfef3597917" exitCode=0 Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.097903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" event={"ID":"48f25b67-d57e-476a-ae32-bed8363c1865","Type":"ContainerDied","Data":"022ec0c7a87e6be2451dc8fdc102e78da1685ffde242cd65e49c9cfef3597917"} Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.097938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" event={"ID":"48f25b67-d57e-476a-ae32-bed8363c1865","Type":"ContainerStarted","Data":"3ff28e474dbec7b353b31e79a1c80d016a1b29c719a53a387eb73d1700378558"} Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.101584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerStarted","Data":"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2"} Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.101662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerStarted","Data":"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc"} Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.101682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerStarted","Data":"2fdb06b13f04450f1288337db2ccba9131fbe40eac36d97dcfba7e2c6bbd1b5f"} Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.101898 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.156412 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-658f9d84-hnspq" podStartSLOduration=2.156395414 podStartE2EDuration="2.156395414s" podCreationTimestamp="2025-11-22 09:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:11.152607513 +0000 UTC m=+7952.994030529" watchObservedRunningTime="2025-11-22 09:23:11.156395414 +0000 UTC m=+7952.997818420" Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.946052 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.947954 4858 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.951165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.954611 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 09:23:11 crc kubenswrapper[4858]: I1122 09:23:11.963792 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.032594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxrx\" (UniqueName: \"kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.111135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" 
event={"ID":"48f25b67-d57e-476a-ae32-bed8363c1865","Type":"ContainerStarted","Data":"25729f4f19af985cdfef368bfbe77c283d73f25ed8077deb863daa91b736f94c"} Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.111421 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxrx\" (UniqueName: \"kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.137486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.150807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.153200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.153413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.153470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.155350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.164131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.164408 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" podStartSLOduration=3.164386642 podStartE2EDuration="3.164386642s" podCreationTimestamp="2025-11-22 09:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:12.128157533 +0000 UTC m=+7953.969580539" watchObservedRunningTime="2025-11-22 09:23:12.164386642 +0000 UTC m=+7954.005809648" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.171122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxrx\" (UniqueName: \"kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx\") pod \"neutron-7dc86c6f7-88xlp\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.277223 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:12 crc kubenswrapper[4858]: I1122 09:23:12.795913 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:23:12 crc kubenswrapper[4858]: W1122 09:23:12.801634 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38ef80a_bbad_4072_a37b_1e355a943447.slice/crio-c516c0accc10540d6d5055e19e85dee17c6755eef19e87758313171f9011f512 WatchSource:0}: Error finding container c516c0accc10540d6d5055e19e85dee17c6755eef19e87758313171f9011f512: Status 404 returned error can't find the container with id c516c0accc10540d6d5055e19e85dee17c6755eef19e87758313171f9011f512 Nov 22 09:23:13 crc kubenswrapper[4858]: I1122 09:23:13.124549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerStarted","Data":"412e82414159e7ac3a4aa5c2cccb641255d6bef151b2b51f1cf479bfc2da047b"} Nov 22 09:23:13 crc kubenswrapper[4858]: I1122 09:23:13.124835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerStarted","Data":"c516c0accc10540d6d5055e19e85dee17c6755eef19e87758313171f9011f512"} Nov 22 09:23:14 crc kubenswrapper[4858]: I1122 09:23:14.136900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerStarted","Data":"96d625e1d523edde845f7074cc2ca87e3c4b5c2c1898cd03e2d07a4a1aab3b91"} Nov 22 09:23:14 crc kubenswrapper[4858]: I1122 09:23:14.137429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:14 crc kubenswrapper[4858]: I1122 09:23:14.177898 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7dc86c6f7-88xlp" podStartSLOduration=3.17787426 podStartE2EDuration="3.17787426s" podCreationTimestamp="2025-11-22 09:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:14.168628253 +0000 UTC m=+7956.010051279" watchObservedRunningTime="2025-11-22 09:23:14.17787426 +0000 UTC m=+7956.019297266" Nov 22 09:23:15 crc kubenswrapper[4858]: I1122 09:23:15.312145 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:23:15 crc kubenswrapper[4858]: I1122 09:23:15.312649 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:23:15 crc kubenswrapper[4858]: I1122 09:23:15.312754 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:23:15 crc kubenswrapper[4858]: I1122 09:23:15.314157 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:23:15 crc kubenswrapper[4858]: I1122 09:23:15.314265 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" gracePeriod=600 Nov 22 09:23:15 crc kubenswrapper[4858]: E1122 09:23:15.453006 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:23:16 crc kubenswrapper[4858]: I1122 09:23:16.153109 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" exitCode=0 Nov 22 09:23:16 crc kubenswrapper[4858]: I1122 09:23:16.153149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417"} Nov 22 09:23:16 crc kubenswrapper[4858]: I1122 09:23:16.153211 4858 scope.go:117] "RemoveContainer" containerID="42f3d23d35406b2d38363bc66b651f22fd81645127e429253baa3074251843ed" Nov 22 09:23:16 crc kubenswrapper[4858]: I1122 09:23:16.153849 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:23:16 crc kubenswrapper[4858]: E1122 09:23:16.154265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:23:19 crc kubenswrapper[4858]: I1122 09:23:19.666557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:23:19 crc kubenswrapper[4858]: I1122 09:23:19.744730 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:23:19 crc kubenswrapper[4858]: I1122 09:23:19.746757 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="dnsmasq-dns" containerID="cri-o://a5c5e9edcb5e3de31e1285faeff69b1a4b12b015a3ebea1d3911cea5bc782e7e" gracePeriod=10 Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.198524 4858 generic.go:334] "Generic (PLEG): container finished" podID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerID="a5c5e9edcb5e3de31e1285faeff69b1a4b12b015a3ebea1d3911cea5bc782e7e" 
exitCode=0 Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.198561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" event={"ID":"f8cfe71f-8556-40e0-b48b-dae7af5efc88","Type":"ContainerDied","Data":"a5c5e9edcb5e3de31e1285faeff69b1a4b12b015a3ebea1d3911cea5bc782e7e"} Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.198875 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" event={"ID":"f8cfe71f-8556-40e0-b48b-dae7af5efc88","Type":"ContainerDied","Data":"a6becda7b4d3a7fb0d6fc059e366a9ed5942b78973868f2ca3300a856db4387a"} Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.198889 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6becda7b4d3a7fb0d6fc059e366a9ed5942b78973868f2ca3300a856db4387a" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.207002 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.311173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc\") pod \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.311487 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvcx8\" (UniqueName: \"kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8\") pod \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.311603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb\") pod \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.311731 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config\") pod \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.311971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb\") pod \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\" (UID: \"f8cfe71f-8556-40e0-b48b-dae7af5efc88\") " Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.318633 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8" (OuterVolumeSpecName: "kube-api-access-qvcx8") pod "f8cfe71f-8556-40e0-b48b-dae7af5efc88" (UID: "f8cfe71f-8556-40e0-b48b-dae7af5efc88"). InnerVolumeSpecName "kube-api-access-qvcx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.356917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8cfe71f-8556-40e0-b48b-dae7af5efc88" (UID: "f8cfe71f-8556-40e0-b48b-dae7af5efc88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.357114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8cfe71f-8556-40e0-b48b-dae7af5efc88" (UID: "f8cfe71f-8556-40e0-b48b-dae7af5efc88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.363044 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config" (OuterVolumeSpecName: "config") pod "f8cfe71f-8556-40e0-b48b-dae7af5efc88" (UID: "f8cfe71f-8556-40e0-b48b-dae7af5efc88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.366200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8cfe71f-8556-40e0-b48b-dae7af5efc88" (UID: "f8cfe71f-8556-40e0-b48b-dae7af5efc88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.414113 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.414279 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.414348 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvcx8\" (UniqueName: \"kubernetes.io/projected/f8cfe71f-8556-40e0-b48b-dae7af5efc88-kube-api-access-qvcx8\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.414403 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:20 crc kubenswrapper[4858]: I1122 09:23:20.414453 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8cfe71f-8556-40e0-b48b-dae7af5efc88-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:21 crc kubenswrapper[4858]: I1122 09:23:21.205517 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c6d587c67-t2j5z" Nov 22 09:23:21 crc kubenswrapper[4858]: I1122 09:23:21.253286 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:23:21 crc kubenswrapper[4858]: I1122 09:23:21.260055 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c6d587c67-t2j5z"] Nov 22 09:23:21 crc kubenswrapper[4858]: I1122 09:23:21.546811 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" path="/var/lib/kubelet/pods/f8cfe71f-8556-40e0-b48b-dae7af5efc88/volumes" Nov 22 09:23:30 crc kubenswrapper[4858]: I1122 09:23:30.536411 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:23:30 crc kubenswrapper[4858]: E1122 09:23:30.537547 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:23:39 crc kubenswrapper[4858]: I1122 09:23:39.795874 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:42 crc kubenswrapper[4858]: I1122 09:23:42.304597 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:23:42 crc kubenswrapper[4858]: I1122 09:23:42.367093 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:42 crc kubenswrapper[4858]: I1122 09:23:42.367369 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-658f9d84-hnspq" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-api" containerID="cri-o://9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc" gracePeriod=30 Nov 22 09:23:42 crc kubenswrapper[4858]: I1122 09:23:42.367776 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-658f9d84-hnspq" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-httpd" containerID="cri-o://ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2" gracePeriod=30 Nov 22 09:23:42 crc kubenswrapper[4858]: I1122 09:23:42.536548 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:23:42 crc kubenswrapper[4858]: E1122 09:23:42.536855 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:23:43 crc kubenswrapper[4858]: I1122 09:23:43.407519 4858 generic.go:334] "Generic (PLEG): container finished" podID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerID="ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2" exitCode=0 Nov 22 09:23:43 crc kubenswrapper[4858]: I1122 09:23:43.407803 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerDied","Data":"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2"} Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.122613 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.297310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config\") pod \"7f5a35c3-7712-473b-8eb2-b338189529b8\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.297469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwsg\" (UniqueName: \"kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg\") pod \"7f5a35c3-7712-473b-8eb2-b338189529b8\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.297575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle\") pod \"7f5a35c3-7712-473b-8eb2-b338189529b8\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.298015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs\") pod \"7f5a35c3-7712-473b-8eb2-b338189529b8\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.298123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config\") pod \"7f5a35c3-7712-473b-8eb2-b338189529b8\" (UID: \"7f5a35c3-7712-473b-8eb2-b338189529b8\") " Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.302760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg" (OuterVolumeSpecName: "kube-api-access-xtwsg") pod "7f5a35c3-7712-473b-8eb2-b338189529b8" (UID: "7f5a35c3-7712-473b-8eb2-b338189529b8"). InnerVolumeSpecName "kube-api-access-xtwsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.302970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7f5a35c3-7712-473b-8eb2-b338189529b8" (UID: "7f5a35c3-7712-473b-8eb2-b338189529b8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.353972 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f5a35c3-7712-473b-8eb2-b338189529b8" (UID: "7f5a35c3-7712-473b-8eb2-b338189529b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.361909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config" (OuterVolumeSpecName: "config") pod "7f5a35c3-7712-473b-8eb2-b338189529b8" (UID: "7f5a35c3-7712-473b-8eb2-b338189529b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.379256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7f5a35c3-7712-473b-8eb2-b338189529b8" (UID: "7f5a35c3-7712-473b-8eb2-b338189529b8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.400347 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.400392 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwsg\" (UniqueName: \"kubernetes.io/projected/7f5a35c3-7712-473b-8eb2-b338189529b8-kube-api-access-xtwsg\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.400407 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.400450 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.400463 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f5a35c3-7712-473b-8eb2-b338189529b8-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.426441 4858 generic.go:334] "Generic (PLEG): container finished" podID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerID="9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc" exitCode=0 Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.426493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerDied","Data":"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc"} Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.426531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-658f9d84-hnspq" event={"ID":"7f5a35c3-7712-473b-8eb2-b338189529b8","Type":"ContainerDied","Data":"2fdb06b13f04450f1288337db2ccba9131fbe40eac36d97dcfba7e2c6bbd1b5f"} Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.426552 4858 scope.go:117] "RemoveContainer" containerID="ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.426557 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-658f9d84-hnspq" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.445560 4858 scope.go:117] "RemoveContainer" containerID="9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.469931 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.479222 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-658f9d84-hnspq"] Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.480121 4858 scope.go:117] "RemoveContainer" containerID="ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2" Nov 22 09:23:45 crc kubenswrapper[4858]: E1122 09:23:45.480907 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2\": container with ID starting with ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2 not found: ID does not exist" containerID="ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.480993 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2"} err="failed to get container status \"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2\": rpc error: code = NotFound desc = could not find container \"ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2\": container with ID starting with ec6f3e3009f5538569317f54b0d29548ef20e969c86fb66f18e083d78fcc9bc2 not found: ID does not exist" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.481204 4858 scope.go:117] "RemoveContainer" containerID="9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc" Nov 22 09:23:45 crc kubenswrapper[4858]: E1122 09:23:45.481658 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc\": container with ID starting with 9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc not found: ID does not exist" containerID="9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.481697 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc"} err="failed to get container status \"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc\": rpc error: code = NotFound desc = could not find container \"9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc\": container with ID starting with 9580a2fddea27f33d8806314617f49d2be1e845fd488bc0975e12aea19206bcc not found: ID does not exist" Nov 22 09:23:45 crc kubenswrapper[4858]: I1122 09:23:45.551169 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" path="/var/lib/kubelet/pods/7f5a35c3-7712-473b-8eb2-b338189529b8/volumes" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.778230 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qkfjc"] Nov 22 09:23:51 crc kubenswrapper[4858]: E1122 09:23:51.779259 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="init" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779270 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="init" Nov 22 09:23:51 crc kubenswrapper[4858]: E1122 09:23:51.779287 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="dnsmasq-dns" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779293 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="dnsmasq-dns" Nov 22 09:23:51 crc kubenswrapper[4858]: E1122 09:23:51.779310 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-httpd" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779332 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-httpd" Nov 22 09:23:51 crc kubenswrapper[4858]: E1122 09:23:51.779343 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-api" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779350 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-api" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779503 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-api" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779514 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5a35c3-7712-473b-8eb2-b338189529b8" containerName="neutron-httpd" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.779536 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8cfe71f-8556-40e0-b48b-dae7af5efc88" containerName="dnsmasq-dns" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.780193 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.783072 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.783094 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.783103 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.787124 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7q8vr" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.787641 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.803457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qkfjc"] Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.847950 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hpwgh"] Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.849117 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.861381 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qkfjc"] Nov 22 09:23:51 crc kubenswrapper[4858]: E1122 09:23:51.862030 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-jxjgz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-qkfjc" podUID="3350d27e-f278-41c0-b25c-ac26c0cb157d" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.878883 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hpwgh"] Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.915518 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"] Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.917454 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918292 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918572 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.918711 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 
crc kubenswrapper[4858]: I1122 09:23:51.918800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxjgz\" (UniqueName: \"kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:51 crc kubenswrapper[4858]: I1122 09:23:51.946292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"] Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxjgz\" (UniqueName: \"kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjnd\" (UniqueName: \"kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lcg\" (UniqueName: \"kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022831 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.022953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.023019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.023057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.023082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.023741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.024060 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.024346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.030074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.044965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.045299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf\") pod \"swift-ring-rebalance-qkfjc\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.048841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxjgz\" (UniqueName: \"kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz\") pod \"swift-ring-rebalance-qkfjc\" (UID: 
\"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.125806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.125909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.125942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjnd\" (UniqueName: \"kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89lcg\" (UniqueName: \"kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " 
pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126222 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126242 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.126858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.127288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.127412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.127583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.127644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.127648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.128111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.129248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.129905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.130811 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.145295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjnd\" (UniqueName: \"kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd\") pod \"dnsmasq-dns-5857d96d95-pp5rx\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.150283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lcg\" (UniqueName: \"kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg\") pod \"swift-ring-rebalance-hpwgh\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.170489 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.237604 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.512742 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.526884 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qkfjc" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.633916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxjgz\" (UniqueName: \"kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.633987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.634015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.634044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.634093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.634127 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.634149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices\") pod \"3350d27e-f278-41c0-b25c-ac26c0cb157d\" (UID: \"3350d27e-f278-41c0-b25c-ac26c0cb157d\") " Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.635850 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts" (OuterVolumeSpecName: "scripts") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.649531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.649769 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.650464 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.651568 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.651849 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz" (OuterVolumeSpecName: "kube-api-access-jxjgz") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "kube-api-access-jxjgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.655739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3350d27e-f278-41c0-b25c-ac26c0cb157d" (UID: "3350d27e-f278-41c0-b25c-ac26c0cb157d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.691765 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hpwgh"]
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736293 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3350d27e-f278-41c0-b25c-ac26c0cb157d-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736376 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736388 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-ring-data-devices\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736398 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxjgz\" (UniqueName: \"kubernetes.io/projected/3350d27e-f278-41c0-b25c-ac26c0cb157d-kube-api-access-jxjgz\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736406 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736414 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3350d27e-f278-41c0-b25c-ac26c0cb157d-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.736424 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3350d27e-f278-41c0-b25c-ac26c0cb157d-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 09:23:52 crc kubenswrapper[4858]: I1122 09:23:52.794723 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"]
Nov 22 09:23:52 crc kubenswrapper[4858]: W1122 09:23:52.799580 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod808e0c25_f712_4fc1_b615_a7f7f9dffa27.slice/crio-a127f6c45c9b8b0b5114cd87e6ecc382c835f1ed5f482762ba8f7ef235733946 WatchSource:0}: Error finding container a127f6c45c9b8b0b5114cd87e6ecc382c835f1ed5f482762ba8f7ef235733946: Status 404 returned error can't find the container with id a127f6c45c9b8b0b5114cd87e6ecc382c835f1ed5f482762ba8f7ef235733946
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.522880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hpwgh" event={"ID":"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3","Type":"ContainerStarted","Data":"5f5a8b39266cb3d11838b32d467cebc84b8d9237ba7bad92c88c45cd52e10c3f"}
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.525941 4858 generic.go:334] "Generic (PLEG): container finished" podID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerID="5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df" exitCode=0
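
The run of reconciler_common.go and operation_generator.go entries above traces the kubelet volume reconciler through its full lifecycle: each volume of swift-ring-rebalance-hpwgh and dnsmasq-dns-5857d96d95-pp5rx goes VerifyControllerAttachedVolume -> MountVolume started -> MountVolume.SetUp succeeded, while the volumes of the replaced swift-ring-rebalance-qkfjc pod go UnmountVolume started -> UnmountVolume.TearDown succeeded -> Volume detached. When reading a capture like this it can help to fold the stream into one line per pod/volume pair. The snippet below is only a rough illustration, not an official tool: it assumes the excerpt has been saved one journal entry per line (as journalctl emits it) in a hypothetical file named kubelet.log, and the regular expressions are tuned to the quoting visible in this capture.

import re
from collections import defaultdict

# Fold kubelet volume-reconciler entries into one line per (pod, volume),
# listing the lifecycle stages that were logged for that pair.
STAGES = [
    ("operationExecutor.VerifyControllerAttachedVolume started", "verify"),
    ("operationExecutor.MountVolume started", "mount"),
    ("MountVolume.SetUp succeeded", "setup-ok"),
    ("operationExecutor.UnmountVolume started", "unmount"),
    ("UnmountVolume.TearDown succeeded", "teardown-ok"),
    ("Volume detached", "detached"),
]
VOL_RE = re.compile(r'for volume \\?"([^\\"]+)\\?"')   # volume name, escaped or not
POD_RE = re.compile(r'pod="([^"]+)"')                  # trailing pod="ns/name" field

timeline = defaultdict(list)
with open("kubelet.log") as fh:                        # hypothetical file name
    for line in fh:
        for marker, stage in STAGES:
            if marker in line:
                vol = VOL_RE.search(line)
                pod = POD_RE.search(line)
                key = (pod.group(1) if pod else "-", vol.group(1) if vol else "-")
                timeline[key].append(stage)
                break

for (pod, volume), stages in sorted(timeline.items()):
    print(f"{pod:45s} {volume:35s} {' -> '.join(stages)}")

Against this excerpt it would report, for example, the scripts volume of openstack/swift-ring-rebalance-hpwgh as verify -> mount -> setup-ok; the unmount-side entries carry no pod= field, so the torn-down qkfjc volumes are grouped under "-".
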
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.526136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qkfjc"
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.533725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" event={"ID":"808e0c25-f712-4fc1-b615-a7f7f9dffa27","Type":"ContainerDied","Data":"5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df"}
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.533776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" event={"ID":"808e0c25-f712-4fc1-b615-a7f7f9dffa27","Type":"ContainerStarted","Data":"a127f6c45c9b8b0b5114cd87e6ecc382c835f1ed5f482762ba8f7ef235733946"}
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.706910 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qkfjc"]
Nov 22 09:23:53 crc kubenswrapper[4858]: I1122 09:23:53.729182 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-qkfjc"]
Nov 22 09:23:54 crc kubenswrapper[4858]: I1122 09:23:54.535134 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417"
Nov 22 09:23:54 crc kubenswrapper[4858]: E1122 09:23:54.535618 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4"
Nov 22 09:23:54 crc kubenswrapper[4858]: I1122 09:23:54.539707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" event={"ID":"808e0c25-f712-4fc1-b615-a7f7f9dffa27","Type":"ContainerStarted","Data":"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79"}
Nov 22 09:23:54 crc kubenswrapper[4858]: I1122 09:23:54.539992 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx"
Nov 22 09:23:54 crc kubenswrapper[4858]: I1122 09:23:54.559810 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" podStartSLOduration=3.559787649 podStartE2EDuration="3.559787649s" podCreationTimestamp="2025-11-22 09:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:54.556451622 +0000 UTC m=+7996.397874638" watchObservedRunningTime="2025-11-22 09:23:54.559787649 +0000 UTC m=+7996.401210655"
Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.437604 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"]
Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.439711 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.442282 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.445471 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.446861 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.457518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"] Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.556258 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3350d27e-f278-41c0-b25c-ac26c0cb157d" path="/var/lib/kubelet/pods/3350d27e-f278-41c0-b25c-ac26c0cb157d/volumes" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.602423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.602488 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.602545 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.602722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.602940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.603000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6hc\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.603036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.603183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.705579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.705849 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk6hc\" (UniqueName: 
\"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.706914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.711380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.713480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.714516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.715428 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.729111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.731209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk6hc\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc\") pod \"swift-proxy-579546c64d-fkr76\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:23:55 crc kubenswrapper[4858]: I1122 09:23:55.758360 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-579546c64d-fkr76"
Nov 22 09:23:57 crc kubenswrapper[4858]: I1122 09:23:57.587545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hpwgh" event={"ID":"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3","Type":"ContainerStarted","Data":"038e302d3860418f90b2b7e4958cf548fa4093b4d69d684bd726e1fdd1a9fbf2"}
Nov 22 09:23:57 crc kubenswrapper[4858]: I1122 09:23:57.604437 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hpwgh" podStartSLOduration=2.22869468 podStartE2EDuration="6.604419805s" podCreationTimestamp="2025-11-22 09:23:51 +0000 UTC" firstStartedPulling="2025-11-22 09:23:52.701443746 +0000 UTC m=+7994.542866752" lastFinishedPulling="2025-11-22 09:23:57.077168871 +0000 UTC m=+7998.918591877" observedRunningTime="2025-11-22 09:23:57.602843885 +0000 UTC m=+7999.444266911" watchObservedRunningTime="2025-11-22 09:23:57.604419805 +0000 UTC m=+7999.445842811"
Nov 22 09:23:57 crc kubenswrapper[4858]: I1122 09:23:57.655406 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"]
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.598544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerStarted","Data":"f4d10c4811595086f1768850e4ba22ed889daba484227d3c6748462dbd9d902b"}
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.598878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerStarted","Data":"47732564dcbee4396779be68097333bf6ceab57ebc135dff5097a79c851b70b2"}
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.598889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerStarted","Data":"a1ef9ed064dd54abc9efff79d9dac23ea89e2cdcbd12b540f9bfffe0a4b59651"}
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.598918 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-579546c64d-fkr76"
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.598933 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-579546c64d-fkr76"
Nov 22 09:23:58 crc kubenswrapper[4858]: I1122 09:23:58.638616 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-579546c64d-fkr76" podStartSLOduration=3.638594312 podStartE2EDuration="3.638594312s" podCreationTimestamp="2025-11-22 09:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:23:58.622166256 +0000 UTC m=+8000.463589252" watchObservedRunningTime="2025-11-22 09:23:58.638594312 +0000 UTC m=+8000.480017318"
Nov 22 09:24:01 crc kubenswrapper[4858]: I1122 09:24:01.628487 4858 generic.go:334] "Generic (PLEG): container finished" podID="a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" containerID="038e302d3860418f90b2b7e4958cf548fa4093b4d69d684bd726e1fdd1a9fbf2" exitCode=0
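
The "Observed pod startup duration" records above carry the raw timestamps behind the two figures they report. For swift-ring-rebalance-hpwgh the logged values are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling); for dnsmasq-dns-5857d96d95-pp5rx and swift-proxy-579546c64d-fkr76 the pull timestamps are the zero value, and the two figures coincide. A quick, purely illustrative recomputation from the hpwgh entry (Python, timestamps expressed as seconds after 09:23:00 UTC):

# Timestamps copied from the "Observed pod startup duration" entry for
# openstack/swift-ring-rebalance-hpwgh, as seconds after 09:23:00 UTC.
created    = 51.0            # podCreationTimestamp   2025-11-22 09:23:51
observed   = 57.604419805    # watchObservedRunningTime
pull_start = 52.701443746    # firstStartedPulling
pull_end   = 57.077168871    # lastFinishedPulling

e2e = observed - created               # end-to-end startup time
slo = e2e - (pull_end - pull_start)    # the same span, excluding the image pull
print(f"podStartE2EDuration ~ {e2e:.9f}s")   # 6.604419805s, as logged
print(f"podStartSLOduration ~ {slo:.9f}s")   # 2.228694680, log shows 2.22869468
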
Nov 22 09:24:01 crc kubenswrapper[4858]: I1122 09:24:01.628578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hpwgh" event={"ID":"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3","Type":"ContainerDied","Data":"038e302d3860418f90b2b7e4958cf548fa4093b4d69d684bd726e1fdd1a9fbf2"}
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.239578 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx"
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.307375 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"]
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.307641 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="dnsmasq-dns" containerID="cri-o://25729f4f19af985cdfef368bfbe77c283d73f25ed8077deb863daa91b736f94c" gracePeriod=10
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.663662 4858 generic.go:334] "Generic (PLEG): container finished" podID="48f25b67-d57e-476a-ae32-bed8363c1865" containerID="25729f4f19af985cdfef368bfbe77c283d73f25ed8077deb863daa91b736f94c" exitCode=0
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.664184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" event={"ID":"48f25b67-d57e-476a-ae32-bed8363c1865","Type":"ContainerDied","Data":"25729f4f19af985cdfef368bfbe77c283d73f25ed8077deb863daa91b736f94c"}
Nov 22 09:24:02 crc kubenswrapper[4858]: I1122 09:24:02.929875 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm"
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.058269 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb\") pod \"48f25b67-d57e-476a-ae32-bed8363c1865\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") "
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.058390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6fm\" (UniqueName: \"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm\") pod \"48f25b67-d57e-476a-ae32-bed8363c1865\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") "
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.058452 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb\") pod \"48f25b67-d57e-476a-ae32-bed8363c1865\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") "
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.058469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config\") pod \"48f25b67-d57e-476a-ae32-bed8363c1865\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") "
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.058613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc\") pod \"48f25b67-d57e-476a-ae32-bed8363c1865\" (UID: \"48f25b67-d57e-476a-ae32-bed8363c1865\") "
Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.064879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm" (OuterVolumeSpecName: "kube-api-access-4r6fm") pod "48f25b67-d57e-476a-ae32-bed8363c1865" (UID: "48f25b67-d57e-476a-ae32-bed8363c1865"). InnerVolumeSpecName "kube-api-access-4r6fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.107158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48f25b67-d57e-476a-ae32-bed8363c1865" (UID: "48f25b67-d57e-476a-ae32-bed8363c1865"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.126210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config" (OuterVolumeSpecName: "config") pod "48f25b67-d57e-476a-ae32-bed8363c1865" (UID: "48f25b67-d57e-476a-ae32-bed8363c1865"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.141128 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48f25b67-d57e-476a-ae32-bed8363c1865" (UID: "48f25b67-d57e-476a-ae32-bed8363c1865"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.151308 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48f25b67-d57e-476a-ae32-bed8363c1865" (UID: "48f25b67-d57e-476a-ae32-bed8363c1865"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.162447 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.162621 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6fm\" (UniqueName: \"kubernetes.io/projected/48f25b67-d57e-476a-ae32-bed8363c1865-kube-api-access-4r6fm\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.162708 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.162785 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.162860 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f25b67-d57e-476a-ae32-bed8363c1865-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.178303 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264502 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89lcg\" (UniqueName: \"kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.264950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf\") pod \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\" (UID: \"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3\") " Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.266611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.266924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.268674 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg" (OuterVolumeSpecName: "kube-api-access-89lcg") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "kube-api-access-89lcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.271356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.308388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.308996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts" (OuterVolumeSpecName: "scripts") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.310253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" (UID: "a975fe9f-1fb8-4c7a-b88b-fb806065a5f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367411 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367438 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367447 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367456 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367464 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367475 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89lcg\" (UniqueName: \"kubernetes.io/projected/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-kube-api-access-89lcg\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.367487 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.678599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hpwgh" event={"ID":"a975fe9f-1fb8-4c7a-b88b-fb806065a5f3","Type":"ContainerDied","Data":"5f5a8b39266cb3d11838b32d467cebc84b8d9237ba7bad92c88c45cd52e10c3f"} Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.680509 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5a8b39266cb3d11838b32d467cebc84b8d9237ba7bad92c88c45cd52e10c3f" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.678694 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hpwgh" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.681868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" event={"ID":"48f25b67-d57e-476a-ae32-bed8363c1865","Type":"ContainerDied","Data":"3ff28e474dbec7b353b31e79a1c80d016a1b29c719a53a387eb73d1700378558"} Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.681944 4858 scope.go:117] "RemoveContainer" containerID="25729f4f19af985cdfef368bfbe77c283d73f25ed8077deb863daa91b736f94c" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.681941 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f47b9c95-b9nkm" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.723585 4858 scope.go:117] "RemoveContainer" containerID="022ec0c7a87e6be2451dc8fdc102e78da1685ffde242cd65e49c9cfef3597917" Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.734718 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"] Nov 22 09:24:03 crc kubenswrapper[4858]: I1122 09:24:03.746007 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85f47b9c95-b9nkm"] Nov 22 09:24:05 crc kubenswrapper[4858]: I1122 09:24:05.536612 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:24:05 crc kubenswrapper[4858]: E1122 09:24:05.537646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:24:05 crc kubenswrapper[4858]: I1122 09:24:05.548734 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" path="/var/lib/kubelet/pods/48f25b67-d57e-476a-ae32-bed8363c1865/volumes" Nov 22 09:24:05 crc kubenswrapper[4858]: I1122 09:24:05.764516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:24:05 crc kubenswrapper[4858]: I1122 09:24:05.768411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.610313 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ddns8"] Nov 22 09:24:11 crc kubenswrapper[4858]: E1122 09:24:11.611283 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="init" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.611299 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="init" Nov 22 09:24:11 crc kubenswrapper[4858]: E1122 09:24:11.611330 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" containerName="swift-ring-rebalance" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.611336 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" containerName="swift-ring-rebalance" Nov 22 09:24:11 crc kubenswrapper[4858]: E1122 09:24:11.611365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="dnsmasq-dns" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.611371 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="dnsmasq-dns" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.611521 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" containerName="swift-ring-rebalance" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.611541 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48f25b67-d57e-476a-ae32-bed8363c1865" containerName="dnsmasq-dns" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.615628 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.625591 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ddns8"] Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.712533 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a285-account-create-2pfdm"] Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.713879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.716929 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.734380 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a285-account-create-2pfdm"] Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.762830 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ntx\" (UniqueName: \"kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.762981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.864807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.864894 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prx2b\" (UniqueName: \"kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.864929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.865642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ntx\" (UniqueName: \"kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc 
kubenswrapper[4858]: I1122 09:24:11.866089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.884877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98ntx\" (UniqueName: \"kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx\") pod \"cinder-db-create-ddns8\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.943008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.969068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.969151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prx2b\" (UniqueName: \"kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.970131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:11 crc kubenswrapper[4858]: I1122 09:24:11.987950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prx2b\" (UniqueName: \"kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b\") pod \"cinder-a285-account-create-2pfdm\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.032186 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.440110 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ddns8"] Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.546200 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a285-account-create-2pfdm"] Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.786173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ddns8" event={"ID":"b71ed938-4f65-4b94-8a56-d6d02a2b985a","Type":"ContainerStarted","Data":"a76bbca941c73db5ca0e85403eaaf4dff4c2af41c5efac0bd7e7594f50fed4e9"} Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.786247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ddns8" event={"ID":"b71ed938-4f65-4b94-8a56-d6d02a2b985a","Type":"ContainerStarted","Data":"f1339af8df204c77ce57447854a6a5ad31c7399df808ca6fc8132fb84dc93e42"} Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.788748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a285-account-create-2pfdm" event={"ID":"f1bd1ad2-5162-4665-8b22-899141e7b863","Type":"ContainerStarted","Data":"aa05c2acedc0afa0f0d351bcd018b68acf3603e904b1fd67998bc6eb335d1a33"} Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.809223 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ddns8" podStartSLOduration=1.8091911779999998 podStartE2EDuration="1.809191178s" podCreationTimestamp="2025-11-22 09:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:24:12.802828124 +0000 UTC m=+8014.644251140" watchObservedRunningTime="2025-11-22 09:24:12.809191178 +0000 UTC m=+8014.650614184" Nov 22 09:24:12 crc kubenswrapper[4858]: I1122 09:24:12.819705 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a285-account-create-2pfdm" podStartSLOduration=1.819671333 podStartE2EDuration="1.819671333s" podCreationTimestamp="2025-11-22 09:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:24:12.814316192 +0000 UTC m=+8014.655739218" watchObservedRunningTime="2025-11-22 09:24:12.819671333 +0000 UTC m=+8014.661094339" Nov 22 09:24:13 crc kubenswrapper[4858]: I1122 09:24:13.797646 4858 generic.go:334] "Generic (PLEG): container finished" podID="f1bd1ad2-5162-4665-8b22-899141e7b863" containerID="60d399c736bb2977373cf9f4b26babac25980176be9375e8175f9d98f4168467" exitCode=0 Nov 22 09:24:13 crc kubenswrapper[4858]: I1122 09:24:13.797741 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a285-account-create-2pfdm" event={"ID":"f1bd1ad2-5162-4665-8b22-899141e7b863","Type":"ContainerDied","Data":"60d399c736bb2977373cf9f4b26babac25980176be9375e8175f9d98f4168467"} Nov 22 09:24:13 crc kubenswrapper[4858]: I1122 09:24:13.799357 4858 generic.go:334] "Generic (PLEG): container finished" podID="b71ed938-4f65-4b94-8a56-d6d02a2b985a" containerID="a76bbca941c73db5ca0e85403eaaf4dff4c2af41c5efac0bd7e7594f50fed4e9" exitCode=0 Nov 22 09:24:13 crc kubenswrapper[4858]: I1122 09:24:13.799391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ddns8" 
event={"ID":"b71ed938-4f65-4b94-8a56-d6d02a2b985a","Type":"ContainerDied","Data":"a76bbca941c73db5ca0e85403eaaf4dff4c2af41c5efac0bd7e7594f50fed4e9"} Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.341043 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.348008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.445465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98ntx\" (UniqueName: \"kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx\") pod \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.445921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts\") pod \"f1bd1ad2-5162-4665-8b22-899141e7b863\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.446765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts\") pod \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\" (UID: \"b71ed938-4f65-4b94-8a56-d6d02a2b985a\") " Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.446850 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prx2b\" (UniqueName: \"kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b\") pod \"f1bd1ad2-5162-4665-8b22-899141e7b863\" (UID: \"f1bd1ad2-5162-4665-8b22-899141e7b863\") " Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.446561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1bd1ad2-5162-4665-8b22-899141e7b863" (UID: "f1bd1ad2-5162-4665-8b22-899141e7b863"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.447079 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b71ed938-4f65-4b94-8a56-d6d02a2b985a" (UID: "b71ed938-4f65-4b94-8a56-d6d02a2b985a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.447449 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71ed938-4f65-4b94-8a56-d6d02a2b985a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.447468 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd1ad2-5162-4665-8b22-899141e7b863-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.452902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b" (OuterVolumeSpecName: "kube-api-access-prx2b") pod "f1bd1ad2-5162-4665-8b22-899141e7b863" (UID: "f1bd1ad2-5162-4665-8b22-899141e7b863"). InnerVolumeSpecName "kube-api-access-prx2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.453117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx" (OuterVolumeSpecName: "kube-api-access-98ntx") pod "b71ed938-4f65-4b94-8a56-d6d02a2b985a" (UID: "b71ed938-4f65-4b94-8a56-d6d02a2b985a"). InnerVolumeSpecName "kube-api-access-98ntx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.548680 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prx2b\" (UniqueName: \"kubernetes.io/projected/f1bd1ad2-5162-4665-8b22-899141e7b863-kube-api-access-prx2b\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.548708 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98ntx\" (UniqueName: \"kubernetes.io/projected/b71ed938-4f65-4b94-8a56-d6d02a2b985a-kube-api-access-98ntx\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.814134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a285-account-create-2pfdm" event={"ID":"f1bd1ad2-5162-4665-8b22-899141e7b863","Type":"ContainerDied","Data":"aa05c2acedc0afa0f0d351bcd018b68acf3603e904b1fd67998bc6eb335d1a33"} Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.814180 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa05c2acedc0afa0f0d351bcd018b68acf3603e904b1fd67998bc6eb335d1a33" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.814245 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a285-account-create-2pfdm" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.817255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ddns8" event={"ID":"b71ed938-4f65-4b94-8a56-d6d02a2b985a","Type":"ContainerDied","Data":"f1339af8df204c77ce57447854a6a5ad31c7399df808ca6fc8132fb84dc93e42"} Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.817380 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1339af8df204c77ce57447854a6a5ad31c7399df808ca6fc8132fb84dc93e42" Nov 22 09:24:15 crc kubenswrapper[4858]: I1122 09:24:15.817385 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ddns8" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.979396 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2kc8t"] Nov 22 09:24:16 crc kubenswrapper[4858]: E1122 09:24:16.979985 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bd1ad2-5162-4665-8b22-899141e7b863" containerName="mariadb-account-create" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.979996 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bd1ad2-5162-4665-8b22-899141e7b863" containerName="mariadb-account-create" Nov 22 09:24:16 crc kubenswrapper[4858]: E1122 09:24:16.980013 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71ed938-4f65-4b94-8a56-d6d02a2b985a" containerName="mariadb-database-create" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.980019 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71ed938-4f65-4b94-8a56-d6d02a2b985a" containerName="mariadb-database-create" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.980179 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1bd1ad2-5162-4665-8b22-899141e7b863" containerName="mariadb-account-create" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.980201 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71ed938-4f65-4b94-8a56-d6d02a2b985a" containerName="mariadb-database-create" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.980780 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.982716 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xn8hn" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.982822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.983725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 09:24:16 crc kubenswrapper[4858]: I1122 09:24:16.997030 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2kc8t"] Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.072394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.072697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kcr\" (UniqueName: \"kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.072832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.073007 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.073122 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.073175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7kcr\" (UniqueName: \"kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.174874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.179748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.181668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.182752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.199072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.203350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7kcr\" (UniqueName: \"kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr\") pod \"cinder-db-sync-2kc8t\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.297113 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.729093 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2kc8t"] Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.734910 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:24:17 crc kubenswrapper[4858]: I1122 09:24:17.833584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kc8t" event={"ID":"2a095c1e-c781-4d40-bae8-0012c2c014c3","Type":"ContainerStarted","Data":"4785366c5fc3b9015cb6585cc3d097c4ce5b7dbdcf9940931cfd3b91b8d7b4fa"} Nov 22 09:24:19 crc kubenswrapper[4858]: I1122 09:24:19.542977 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:24:19 crc kubenswrapper[4858]: E1122 09:24:19.543676 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:24:33 crc kubenswrapper[4858]: I1122 09:24:33.536712 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:24:33 crc kubenswrapper[4858]: E1122 09:24:33.537526 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:24:34 crc kubenswrapper[4858]: I1122 09:24:34.536700 4858 scope.go:117] "RemoveContainer" containerID="f0b40141f955e6ecca80e44f80ba04c41aaf32cba35716161f0d67543e4dace5" Nov 22 09:24:36 crc kubenswrapper[4858]: I1122 09:24:36.595004 4858 scope.go:117] "RemoveContainer" containerID="a510d8c3e92e51ab3946f9d37e59fb9e99c0d3fa51267b9c9a3b6b5e9b1bcd34" Nov 22 09:24:38 crc kubenswrapper[4858]: I1122 09:24:38.004166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kc8t" event={"ID":"2a095c1e-c781-4d40-bae8-0012c2c014c3","Type":"ContainerStarted","Data":"2815c5b131e046c9fe0fd6995ed7c565c3961288f63bf880f187637ffc7eb0c1"} Nov 22 09:24:38 crc kubenswrapper[4858]: I1122 09:24:38.024061 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2kc8t" podStartSLOduration=2.730594258 podStartE2EDuration="22.024042899s" podCreationTimestamp="2025-11-22 09:24:16 +0000 UTC" firstStartedPulling="2025-11-22 09:24:17.734685297 +0000 UTC m=+8019.576108303" lastFinishedPulling="2025-11-22 09:24:37.028133948 +0000 UTC m=+8038.869556944" observedRunningTime="2025-11-22 09:24:38.019467623 +0000 UTC m=+8039.860890649" watchObservedRunningTime="2025-11-22 09:24:38.024042899 +0000 UTC m=+8039.865465895" Nov 22 09:24:43 crc kubenswrapper[4858]: I1122 09:24:43.055623 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a095c1e-c781-4d40-bae8-0012c2c014c3" 
containerID="2815c5b131e046c9fe0fd6995ed7c565c3961288f63bf880f187637ffc7eb0c1" exitCode=0 Nov 22 09:24:43 crc kubenswrapper[4858]: I1122 09:24:43.055989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kc8t" event={"ID":"2a095c1e-c781-4d40-bae8-0012c2c014c3","Type":"ContainerDied","Data":"2815c5b131e046c9fe0fd6995ed7c565c3961288f63bf880f187637ffc7eb0c1"} Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.519467 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7kcr\" (UniqueName: \"kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598709 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598808 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.598932 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data\") pod \"2a095c1e-c781-4d40-bae8-0012c2c014c3\" (UID: \"2a095c1e-c781-4d40-bae8-0012c2c014c3\") " Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.599472 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a095c1e-c781-4d40-bae8-0012c2c014c3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.604452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr" (OuterVolumeSpecName: "kube-api-access-k7kcr") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "kube-api-access-k7kcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.609586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts" (OuterVolumeSpecName: "scripts") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.612905 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.633665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.658234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data" (OuterVolumeSpecName: "config-data") pod "2a095c1e-c781-4d40-bae8-0012c2c014c3" (UID: "2a095c1e-c781-4d40-bae8-0012c2c014c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.701240 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.701272 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.701281 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.701290 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a095c1e-c781-4d40-bae8-0012c2c014c3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:44 crc kubenswrapper[4858]: I1122 09:24:44.701299 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7kcr\" (UniqueName: \"kubernetes.io/projected/2a095c1e-c781-4d40-bae8-0012c2c014c3-kube-api-access-k7kcr\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.073464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2kc8t" event={"ID":"2a095c1e-c781-4d40-bae8-0012c2c014c3","Type":"ContainerDied","Data":"4785366c5fc3b9015cb6585cc3d097c4ce5b7dbdcf9940931cfd3b91b8d7b4fa"} Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.073515 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4785366c5fc3b9015cb6585cc3d097c4ce5b7dbdcf9940931cfd3b91b8d7b4fa" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.073523 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2kc8t" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.376443 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:24:45 crc kubenswrapper[4858]: E1122 09:24:45.376877 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a095c1e-c781-4d40-bae8-0012c2c014c3" containerName="cinder-db-sync" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.376903 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a095c1e-c781-4d40-bae8-0012c2c014c3" containerName="cinder-db-sync" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.377131 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a095c1e-c781-4d40-bae8-0012c2c014c3" containerName="cinder-db-sync" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.378166 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.397264 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.516741 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.516950 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.516999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.517070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttsx\" (UniqueName: \"kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.517199 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.536379 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:24:45 crc kubenswrapper[4858]: E1122 09:24:45.538383 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.563748 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.565707 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.567816 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xn8hn" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.571852 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.573886 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.574057 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.575784 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.618551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.618659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.618697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.618746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttsx\" (UniqueName: \"kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.618804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.619790 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.619868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc 
kubenswrapper[4858]: I1122 09:24:45.620581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.622312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.641443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttsx\" (UniqueName: \"kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx\") pod \"dnsmasq-dns-f8d9dc987-64mzw\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.705394 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.725482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2tp\" (UniqueName: \"kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc 
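
The VerifyControllerAttachedVolume entries above list the volume mix for openstack/cinder-api-0: a host-path volume (etc-machine-id), an empty-dir (logs), several secret volumes (config-data, config-data-custom, scripts, combined-ca-bundle) and a projected token volume (kube-api-access-7d2tp). A hedged sketch of part of that mix in k8s.io/api/core/v1 terms; the volume names come from the log, while the referenced secret names and the host path are inferred from those names (the secret cache lines above mention "cinder-api-config-data" and "cinder-scripts") and are not confirmed by the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{Name: "etc-machine-id", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/machine-id"}}}, // assumed path
		{Name: "logs", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-api-config-data"}}}, // assumed secret name
		{Name: "scripts", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scripts"}}}, // assumed secret name
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}
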
kubenswrapper[4858]: I1122 09:24:45.725574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.827530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.827866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.827923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.827951 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.827978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.828024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2tp\" (UniqueName: \"kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.828111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.828966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.829043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc 
kubenswrapper[4858]: I1122 09:24:45.832614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.836142 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.838638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.853529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2tp\" (UniqueName: \"kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.854531 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data\") pod \"cinder-api-0\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " pod="openstack/cinder-api-0" Nov 22 09:24:45 crc kubenswrapper[4858]: I1122 09:24:45.891373 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:46 crc kubenswrapper[4858]: I1122 09:24:46.220303 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:24:46 crc kubenswrapper[4858]: I1122 09:24:46.227240 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.095647 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerID="924de53d8cd56034d35c472f63e003fb770a7a9ffe6b83c1b597c5c075bd3ca5" exitCode=0 Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.095750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" event={"ID":"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c","Type":"ContainerDied","Data":"924de53d8cd56034d35c472f63e003fb770a7a9ffe6b83c1b597c5c075bd3ca5"} Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.096034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" event={"ID":"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c","Type":"ContainerStarted","Data":"8825f1a7132c7a952487fbb8d881f7402fc8e7a3b8c9236287a19b86b27797bb"} Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.098783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerStarted","Data":"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23"} Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.098821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerStarted","Data":"48ab91d6792848421573443515aa492146b78432b921d95ae26169e9eea25279"} Nov 22 09:24:47 crc kubenswrapper[4858]: I1122 09:24:47.737821 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 09:24:48.109857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerStarted","Data":"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961"} Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 09:24:48.110994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 09:24:48.113104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" event={"ID":"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c","Type":"ContainerStarted","Data":"8d43deedb4c57520b219394fc169e7a6a3cddc5fec6e431eec98196831db3c77"} Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 09:24:48.117804 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 09:24:48.148700 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.148679985 podStartE2EDuration="3.148679985s" podCreationTimestamp="2025-11-22 09:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:24:48.143117537 +0000 UTC m=+8049.984540553" watchObservedRunningTime="2025-11-22 09:24:48.148679985 +0000 UTC m=+8049.990102991" Nov 22 09:24:48 crc kubenswrapper[4858]: I1122 
09:24:48.162851 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" podStartSLOduration=3.162831328 podStartE2EDuration="3.162831328s" podCreationTimestamp="2025-11-22 09:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:24:48.162310291 +0000 UTC m=+8050.003733297" watchObservedRunningTime="2025-11-22 09:24:48.162831328 +0000 UTC m=+8050.004254334" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.120652 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api" containerID="cri-o://991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" gracePeriod=30 Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.120635 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api-log" containerID="cri-o://680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" gracePeriod=30 Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.598721 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.702955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2tp\" (UniqueName: \"kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 
09:24:49.703345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs\") pod \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\" (UID: \"8c2c3282-956c-4a1c-b539-2ca54ff1bafa\") " Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.703978 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.704125 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs" (OuterVolumeSpecName: "logs") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.710929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp" (OuterVolumeSpecName: "kube-api-access-7d2tp") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "kube-api-access-7d2tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.711479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.711526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts" (OuterVolumeSpecName: "scripts") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.734775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.763228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data" (OuterVolumeSpecName: "config-data") pod "8c2c3282-956c-4a1c-b539-2ca54ff1bafa" (UID: "8c2c3282-956c-4a1c-b539-2ca54ff1bafa"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806167 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2tp\" (UniqueName: \"kubernetes.io/projected/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-kube-api-access-7d2tp\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806206 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806220 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806232 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806245 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:49 crc kubenswrapper[4858]: I1122 09:24:49.806257 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c3282-956c-4a1c-b539-2ca54ff1bafa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.129496 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerID="991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" exitCode=0 Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.130533 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerID="680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" exitCode=143 Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.129586 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.129557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerDied","Data":"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961"} Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.130759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerDied","Data":"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23"} Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.130782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c2c3282-956c-4a1c-b539-2ca54ff1bafa","Type":"ContainerDied","Data":"48ab91d6792848421573443515aa492146b78432b921d95ae26169e9eea25279"} Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.130798 4858 scope.go:117] "RemoveContainer" containerID="991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.166349 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.167047 4858 scope.go:117] "RemoveContainer" containerID="680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.173609 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.193347 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:50 crc kubenswrapper[4858]: E1122 09:24:50.193727 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api-log" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.193747 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api-log" Nov 22 09:24:50 crc kubenswrapper[4858]: E1122 09:24:50.193764 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.193771 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.193932 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api-log" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.193951 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" containerName="cinder-api" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.194897 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.200720 4858 scope.go:117] "RemoveContainer" containerID="991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" Nov 22 09:24:50 crc kubenswrapper[4858]: E1122 09:24:50.201274 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961\": container with ID starting with 991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961 not found: ID does not exist" containerID="991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.201423 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961"} err="failed to get container status \"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961\": rpc error: code = NotFound desc = could not find container \"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961\": container with ID starting with 991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961 not found: ID does not exist" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.201533 4858 scope.go:117] "RemoveContainer" containerID="680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.202385 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.202780 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.203076 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.203656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.203710 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xn8hn" Nov 22 09:24:50 crc kubenswrapper[4858]: E1122 09:24:50.204267 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23\": container with ID starting with 680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23 not found: ID does not exist" containerID="680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.204330 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23"} err="failed to get container status \"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23\": rpc error: code = NotFound desc = could not find container \"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23\": container with ID starting with 680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23 not found: ID does not exist" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.204367 4858 scope.go:117] "RemoveContainer" containerID="991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961" Nov 22 
09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.205441 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961"} err="failed to get container status \"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961\": rpc error: code = NotFound desc = could not find container \"991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961\": container with ID starting with 991de9552d482fa579b9e98e26090845861802f687fcdba4a96447639218f961 not found: ID does not exist" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.205467 4858 scope.go:117] "RemoveContainer" containerID="680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.205615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.206463 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23"} err="failed to get container status \"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23\": rpc error: code = NotFound desc = could not find container \"680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23\": container with ID starting with 680308064be3bc30bdb2825aef5b686e2aea8ef8be38a0af182319f2d17adc23 not found: ID does not exist" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.209293 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95d5l\" (UniqueName: \"kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " 
pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314734 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.314785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95d5l\" (UniqueName: \"kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.416987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.417020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.417249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.420416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.420515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.421003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.421623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.421767 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " 
pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.422086 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.433965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95d5l\" (UniqueName: \"kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l\") pod \"cinder-api-0\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.516214 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:24:50 crc kubenswrapper[4858]: I1122 09:24:50.943849 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:24:50 crc kubenswrapper[4858]: W1122 09:24:50.947650 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded9a4741_9ca9_49cf_abed_8d26ab981d4f.slice/crio-6368f48f3511194e154d59ae69f2926ab8610a5a0b3bfd05005f77315d82dd97 WatchSource:0}: Error finding container 6368f48f3511194e154d59ae69f2926ab8610a5a0b3bfd05005f77315d82dd97: Status 404 returned error can't find the container with id 6368f48f3511194e154d59ae69f2926ab8610a5a0b3bfd05005f77315d82dd97 Nov 22 09:24:51 crc kubenswrapper[4858]: I1122 09:24:51.142437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerStarted","Data":"6368f48f3511194e154d59ae69f2926ab8610a5a0b3bfd05005f77315d82dd97"} Nov 22 09:24:51 crc kubenswrapper[4858]: I1122 09:24:51.547294 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2c3282-956c-4a1c-b539-2ca54ff1bafa" path="/var/lib/kubelet/pods/8c2c3282-956c-4a1c-b539-2ca54ff1bafa/volumes" Nov 22 09:24:55 crc kubenswrapper[4858]: I1122 09:24:55.706605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:24:55 crc kubenswrapper[4858]: I1122 09:24:55.788079 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"] Nov 22 09:24:55 crc kubenswrapper[4858]: I1122 09:24:55.788561 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="dnsmasq-dns" containerID="cri-o://2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79" gracePeriod=10 Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.006991 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.142265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config\") pod \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.142412 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb\") pod \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.142475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc\") pod \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.142516 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvjnd\" (UniqueName: \"kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd\") pod \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.142582 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb\") pod \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\" (UID: \"808e0c25-f712-4fc1-b615-a7f7f9dffa27\") " Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.148974 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd" (OuterVolumeSpecName: "kube-api-access-hvjnd") pod "808e0c25-f712-4fc1-b615-a7f7f9dffa27" (UID: "808e0c25-f712-4fc1-b615-a7f7f9dffa27"). InnerVolumeSpecName "kube-api-access-hvjnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.192122 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config" (OuterVolumeSpecName: "config") pod "808e0c25-f712-4fc1-b615-a7f7f9dffa27" (UID: "808e0c25-f712-4fc1-b615-a7f7f9dffa27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.196720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "808e0c25-f712-4fc1-b615-a7f7f9dffa27" (UID: "808e0c25-f712-4fc1-b615-a7f7f9dffa27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.199079 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "808e0c25-f712-4fc1-b615-a7f7f9dffa27" (UID: "808e0c25-f712-4fc1-b615-a7f7f9dffa27"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208500 4858 generic.go:334] "Generic (PLEG): container finished" podID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerID="2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79" exitCode=0 Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208549 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" event={"ID":"808e0c25-f712-4fc1-b615-a7f7f9dffa27","Type":"ContainerDied","Data":"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79"} Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "808e0c25-f712-4fc1-b615-a7f7f9dffa27" (UID: "808e0c25-f712-4fc1-b615-a7f7f9dffa27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208606 4858 scope.go:117] "RemoveContainer" containerID="2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.208593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5857d96d95-pp5rx" event={"ID":"808e0c25-f712-4fc1-b615-a7f7f9dffa27","Type":"ContainerDied","Data":"a127f6c45c9b8b0b5114cd87e6ecc382c835f1ed5f482762ba8f7ef235733946"} Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.244189 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.244224 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.244238 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.244248 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvjnd\" (UniqueName: \"kubernetes.io/projected/808e0c25-f712-4fc1-b615-a7f7f9dffa27-kube-api-access-hvjnd\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.244257 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/808e0c25-f712-4fc1-b615-a7f7f9dffa27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.287444 4858 scope.go:117] "RemoveContainer" containerID="5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.287502 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"] Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.295496 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5857d96d95-pp5rx"] Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.318380 4858 scope.go:117] "RemoveContainer" containerID="2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79" Nov 22 09:24:57 crc kubenswrapper[4858]: E1122 09:24:57.318835 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79\": container with ID starting with 2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79 not found: ID does not exist" containerID="2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.318881 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79"} err="failed to get container status \"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79\": rpc error: code = NotFound desc = could not find container \"2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79\": container with ID starting with 2aaa413e54392a0d55ede8fb78a40c3dec7826dfd04de1cde1bc2b23f5ccac79 not found: ID does not exist" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.318912 4858 scope.go:117] "RemoveContainer" containerID="5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df" Nov 22 09:24:57 crc kubenswrapper[4858]: E1122 09:24:57.319226 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df\": container with ID starting with 5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df not found: ID does not exist" containerID="5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.319259 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df"} err="failed to get container status \"5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df\": rpc error: code = NotFound desc = could not find container \"5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df\": container with ID starting with 5194dfde219dc5d893c603d937799ac0325fcd22d6427ec7129ed454824091df not found: ID does not exist" Nov 22 09:24:57 crc kubenswrapper[4858]: I1122 09:24:57.548975 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" path="/var/lib/kubelet/pods/808e0c25-f712-4fc1-b615-a7f7f9dffa27/volumes" Nov 22 09:24:58 crc kubenswrapper[4858]: I1122 09:24:58.536474 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:24:58 crc kubenswrapper[4858]: E1122 09:24:58.537087 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:24:59 crc kubenswrapper[4858]: I1122 09:24:59.240875 4858 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerStarted","Data":"0888bd2c34d32eaea686468162e3f22fa8d53d803c57d02c5aea470c81d739a2"} Nov 22 09:24:59 crc kubenswrapper[4858]: I1122 09:24:59.241443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 09:24:59 crc kubenswrapper[4858]: I1122 09:24:59.241460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerStarted","Data":"6cee190ee5906191b118eba9aab7b11dde77537c3e42353998c95be3fb341557"} Nov 22 09:24:59 crc kubenswrapper[4858]: I1122 09:24:59.267357 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.267332101 podStartE2EDuration="9.267332101s" podCreationTimestamp="2025-11-22 09:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:24:59.261436953 +0000 UTC m=+8061.102859969" watchObservedRunningTime="2025-11-22 09:24:59.267332101 +0000 UTC m=+8061.108755127" Nov 22 09:25:07 crc kubenswrapper[4858]: I1122 09:25:07.442980 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 09:25:10 crc kubenswrapper[4858]: I1122 09:25:10.535814 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:25:10 crc kubenswrapper[4858]: E1122 09:25:10.537044 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:25:25 crc kubenswrapper[4858]: I1122 09:25:25.536570 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:25:25 crc kubenswrapper[4858]: E1122 09:25:25.537625 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.308983 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:29 crc kubenswrapper[4858]: E1122 09:25:29.309520 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="dnsmasq-dns" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.309533 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="dnsmasq-dns" Nov 22 09:25:29 crc kubenswrapper[4858]: E1122 09:25:29.309552 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="init" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.309558 4858 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="init" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.309696 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="808e0c25-f712-4fc1-b615-a7f7f9dffa27" containerName="dnsmasq-dns" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.310528 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.313112 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.327480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774hh\" (UniqueName: \"kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.440757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774hh\" (UniqueName: \"kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542534 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.542783 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.549081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.550466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.553169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.561210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774hh\" (UniqueName: \"kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " 
pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.564902 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:29 crc kubenswrapper[4858]: I1122 09:25:29.628194 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:30 crc kubenswrapper[4858]: I1122 09:25:30.063490 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:30 crc kubenswrapper[4858]: I1122 09:25:30.527439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerStarted","Data":"55952b35ff26728be8b3f696e89ba388705a42f78b0ea26efce03dda70ca803e"} Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.189819 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.190349 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api-log" containerID="cri-o://0888bd2c34d32eaea686468162e3f22fa8d53d803c57d02c5aea470c81d739a2" gracePeriod=30 Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.190462 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api" containerID="cri-o://6cee190ee5906191b118eba9aab7b11dde77537c3e42353998c95be3fb341557" gracePeriod=30 Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.542186 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerID="0888bd2c34d32eaea686468162e3f22fa8d53d803c57d02c5aea470c81d739a2" exitCode=143 Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.547243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerDied","Data":"0888bd2c34d32eaea686468162e3f22fa8d53d803c57d02c5aea470c81d739a2"} Nov 22 09:25:31 crc kubenswrapper[4858]: I1122 09:25:31.547269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerStarted","Data":"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9"} Nov 22 09:25:32 crc kubenswrapper[4858]: I1122 09:25:32.557965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerStarted","Data":"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d"} Nov 22 09:25:32 crc kubenswrapper[4858]: I1122 09:25:32.588500 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.212001767 podStartE2EDuration="3.588479705s" podCreationTimestamp="2025-11-22 09:25:29 +0000 UTC" firstStartedPulling="2025-11-22 09:25:30.064267744 +0000 UTC m=+8091.905690750" lastFinishedPulling="2025-11-22 09:25:30.440745662 +0000 UTC m=+8092.282168688" observedRunningTime="2025-11-22 09:25:32.578767984 +0000 UTC 
m=+8094.420190990" watchObservedRunningTime="2025-11-22 09:25:32.588479705 +0000 UTC m=+8094.429902711" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.584234 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerID="6cee190ee5906191b118eba9aab7b11dde77537c3e42353998c95be3fb341557" exitCode=0 Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.584341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerDied","Data":"6cee190ee5906191b118eba9aab7b11dde77537c3e42353998c95be3fb341557"} Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.628917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.735466 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.845975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.846011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95d5l\" (UniqueName: \"kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.846053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs\") pod \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\" (UID: \"ed9a4741-9ca9-49cf-abed-8d26ab981d4f\") " Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.846717 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.847012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs" (OuterVolumeSpecName: "logs") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.852577 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l" (OuterVolumeSpecName: "kube-api-access-95d5l") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "kube-api-access-95d5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.852679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts" (OuterVolumeSpecName: "scripts") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.864607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.886620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.903280 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.912626 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.923603 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data" (OuterVolumeSpecName: "config-data") pod "ed9a4741-9ca9-49cf-abed-8d26ab981d4f" (UID: "ed9a4741-9ca9-49cf-abed-8d26ab981d4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948838 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948883 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948899 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948909 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948919 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948929 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948940 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:34 crc kubenswrapper[4858]: I1122 09:25:34.948950 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95d5l\" (UniqueName: \"kubernetes.io/projected/ed9a4741-9ca9-49cf-abed-8d26ab981d4f-kube-api-access-95d5l\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.601390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed9a4741-9ca9-49cf-abed-8d26ab981d4f","Type":"ContainerDied","Data":"6368f48f3511194e154d59ae69f2926ab8610a5a0b3bfd05005f77315d82dd97"} Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.601476 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.601846 4858 scope.go:117] "RemoveContainer" containerID="6cee190ee5906191b118eba9aab7b11dde77537c3e42353998c95be3fb341557" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.635044 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.645762 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.645931 4858 scope.go:117] "RemoveContainer" containerID="0888bd2c34d32eaea686468162e3f22fa8d53d803c57d02c5aea470c81d739a2" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.671518 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:35 crc kubenswrapper[4858]: E1122 09:25:35.672185 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api-log" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.672276 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api-log" Nov 22 09:25:35 crc kubenswrapper[4858]: E1122 09:25:35.672429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.672519 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.673215 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api-log" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.673356 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" containerName="cinder-api" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.674863 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.678006 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.678210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.678350 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.693425 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764106 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764131 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zskhl\" (UniqueName: \"kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.764409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zskhl\" (UniqueName: \"kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.866979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.867726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.868222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.871248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.871585 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.872077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.872921 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.873030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.878543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:35 crc kubenswrapper[4858]: I1122 09:25:35.887819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zskhl\" (UniqueName: \"kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl\") pod \"cinder-api-0\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " pod="openstack/cinder-api-0" Nov 22 09:25:36 crc kubenswrapper[4858]: I1122 09:25:36.007535 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:25:36 crc kubenswrapper[4858]: I1122 09:25:36.462624 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:25:36 crc kubenswrapper[4858]: I1122 09:25:36.536484 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:25:36 crc kubenswrapper[4858]: E1122 09:25:36.536735 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:25:36 crc kubenswrapper[4858]: I1122 09:25:36.609371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerStarted","Data":"94b50848184d79c77f55fd7a85d31fe058007eabd302c1a832cbb3d700404f40"} Nov 22 09:25:37 crc kubenswrapper[4858]: I1122 09:25:37.561454 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed9a4741-9ca9-49cf-abed-8d26ab981d4f" path="/var/lib/kubelet/pods/ed9a4741-9ca9-49cf-abed-8d26ab981d4f/volumes" Nov 22 09:25:37 crc kubenswrapper[4858]: I1122 09:25:37.622478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerStarted","Data":"0d9c14545905e3cda2f017bb37cf1a67c2243ee303a9eec348eaebba94004931"} Nov 22 09:25:37 crc kubenswrapper[4858]: I1122 09:25:37.622549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerStarted","Data":"c02bcf924af8e4c2d6ca90bd8a608ea834531a49916fa28f1e8aadbb6103b5f6"} Nov 22 09:25:38 crc kubenswrapper[4858]: I1122 09:25:38.629172 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 09:25:38 crc kubenswrapper[4858]: I1122 09:25:38.660788 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.660767845 podStartE2EDuration="3.660767845s" podCreationTimestamp="2025-11-22 09:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:25:38.653348038 +0000 UTC m=+8100.494771084" watchObservedRunningTime="2025-11-22 09:25:38.660767845 +0000 UTC m=+8100.502190841" Nov 22 09:25:39 crc kubenswrapper[4858]: I1122 09:25:39.833172 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 09:25:39 crc kubenswrapper[4858]: I1122 09:25:39.895472 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:40 crc kubenswrapper[4858]: I1122 09:25:40.646811 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="cinder-scheduler" containerID="cri-o://0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9" gracePeriod=30 Nov 22 09:25:40 crc kubenswrapper[4858]: I1122 09:25:40.646905 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="probe" containerID="cri-o://b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d" gracePeriod=30 Nov 22 09:25:41 crc kubenswrapper[4858]: I1122 09:25:41.659258 4858 generic.go:334] "Generic (PLEG): container finished" podID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerID="b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d" exitCode=0 Nov 22 09:25:41 crc kubenswrapper[4858]: I1122 09:25:41.659355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerDied","Data":"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d"} Nov 22 09:25:41 crc kubenswrapper[4858]: I1122 09:25:41.984658 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085124 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085307 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-774hh\" (UniqueName: \"kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.085444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id\") pod \"86748a4d-01c0-4825-b77f-0ffc606bae9f\" (UID: \"86748a4d-01c0-4825-b77f-0ffc606bae9f\") " Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.086018 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.093520 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh" (OuterVolumeSpecName: "kube-api-access-774hh") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "kube-api-access-774hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.094038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts" (OuterVolumeSpecName: "scripts") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.094304 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.130586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.177169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data" (OuterVolumeSpecName: "config-data") pod "86748a4d-01c0-4825-b77f-0ffc606bae9f" (UID: "86748a4d-01c0-4825-b77f-0ffc606bae9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187665 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187705 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187718 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187732 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-774hh\" (UniqueName: \"kubernetes.io/projected/86748a4d-01c0-4825-b77f-0ffc606bae9f-kube-api-access-774hh\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187745 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86748a4d-01c0-4825-b77f-0ffc606bae9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.187756 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86748a4d-01c0-4825-b77f-0ffc606bae9f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.670240 4858 generic.go:334] "Generic (PLEG): container finished" podID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerID="0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9" exitCode=0 Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.670291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerDied","Data":"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9"} Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.670340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86748a4d-01c0-4825-b77f-0ffc606bae9f","Type":"ContainerDied","Data":"55952b35ff26728be8b3f696e89ba388705a42f78b0ea26efce03dda70ca803e"} Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.670362 4858 scope.go:117] "RemoveContainer" containerID="b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.670592 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.720278 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.733039 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.739123 4858 scope.go:117] "RemoveContainer" containerID="0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.740887 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:42 crc kubenswrapper[4858]: E1122 09:25:42.741606 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="probe" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.741643 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="probe" Nov 22 09:25:42 crc kubenswrapper[4858]: E1122 09:25:42.741680 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="cinder-scheduler" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.741689 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="cinder-scheduler" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.741887 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="cinder-scheduler" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.741911 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" containerName="probe" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.742911 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.746857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.754038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.779908 4858 scope.go:117] "RemoveContainer" containerID="b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d" Nov 22 09:25:42 crc kubenswrapper[4858]: E1122 09:25:42.782359 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d\": container with ID starting with b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d not found: ID does not exist" containerID="b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.782392 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d"} err="failed to get container status \"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d\": rpc error: code = NotFound desc = could not find container \"b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d\": container with ID starting with b1e0fe370cc49dd36db7bbf10e003aceb20863457357e979300ba986a11fbd3d not found: ID does not exist" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.782412 4858 scope.go:117] "RemoveContainer" containerID="0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9" Nov 22 09:25:42 crc kubenswrapper[4858]: E1122 09:25:42.782723 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9\": container with ID starting with 0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9 not found: ID does not exist" containerID="0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.782740 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9"} err="failed to get container status \"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9\": rpc error: code = NotFound desc = could not find container \"0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9\": container with ID starting with 0fce450a818d54c5177b6ea3fa36cbf0fb3090aae4cb06047b518de9e7078dd9 not found: ID does not exist" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903060 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903156 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljr6l\" (UniqueName: \"kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:42 crc kubenswrapper[4858]: I1122 09:25:42.903717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.004903 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.005307 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.005480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.004985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.005742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.005872 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljr6l\" (UniqueName: \"kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.006005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.010184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.010284 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.011457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.026599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.030225 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljr6l\" (UniqueName: \"kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l\") pod \"cinder-scheduler-0\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.079578 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.562530 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86748a4d-01c0-4825-b77f-0ffc606bae9f" path="/var/lib/kubelet/pods/86748a4d-01c0-4825-b77f-0ffc606bae9f/volumes" Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.563781 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:25:43 crc kubenswrapper[4858]: W1122 09:25:43.564529 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd53819e9_9206_49f4_a1a7_2d9459fcc7c7.slice/crio-a92c87717e1b3d301360396f4d4f5e7faf34a81696ea50b685f7f944d84c09ee WatchSource:0}: Error finding container a92c87717e1b3d301360396f4d4f5e7faf34a81696ea50b685f7f944d84c09ee: Status 404 returned error can't find the container with id a92c87717e1b3d301360396f4d4f5e7faf34a81696ea50b685f7f944d84c09ee Nov 22 09:25:43 crc kubenswrapper[4858]: I1122 09:25:43.685860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerStarted","Data":"a92c87717e1b3d301360396f4d4f5e7faf34a81696ea50b685f7f944d84c09ee"} Nov 22 09:25:44 crc kubenswrapper[4858]: I1122 09:25:44.709077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerStarted","Data":"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513"} Nov 22 09:25:44 crc kubenswrapper[4858]: I1122 09:25:44.709419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerStarted","Data":"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650"} Nov 22 09:25:44 crc kubenswrapper[4858]: I1122 09:25:44.738252 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.738235049 podStartE2EDuration="2.738235049s" podCreationTimestamp="2025-11-22 09:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:25:44.735980057 +0000 UTC m=+8106.577403073" watchObservedRunningTime="2025-11-22 09:25:44.738235049 +0000 UTC m=+8106.579658045" Nov 22 09:25:47 crc kubenswrapper[4858]: I1122 09:25:47.773142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 09:25:48 crc kubenswrapper[4858]: I1122 09:25:48.080643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 09:25:49 crc kubenswrapper[4858]: I1122 09:25:49.543592 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:25:49 crc kubenswrapper[4858]: E1122 09:25:49.544189 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:25:53 crc kubenswrapper[4858]: I1122 09:25:53.301622 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.588710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xg2mb"] Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.590290 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.606710 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xg2mb"] Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.688601 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-034d-account-create-97dqn"] Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.689783 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.691812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.707405 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-034d-account-create-97dqn"] Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.732812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.732896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ppf\" (UniqueName: \"kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.834401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.834524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.834567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqhs\" (UniqueName: \"kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.834640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ppf\" (UniqueName: 
\"kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.835255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.855642 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ppf\" (UniqueName: \"kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf\") pod \"glance-db-create-xg2mb\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.919801 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.936772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.936895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqhs\" (UniqueName: \"kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.938076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:54 crc kubenswrapper[4858]: I1122 09:25:54.964760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqhs\" (UniqueName: \"kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs\") pod \"glance-034d-account-create-97dqn\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.011625 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.400382 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xg2mb"] Nov 22 09:25:55 crc kubenswrapper[4858]: W1122 09:25:55.406467 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef3e476_7f8b_4182_95c6_dd9877b2416a.slice/crio-a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca WatchSource:0}: Error finding container a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca: Status 404 returned error can't find the container with id a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.546926 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-034d-account-create-97dqn"] Nov 22 09:25:55 crc kubenswrapper[4858]: W1122 09:25:55.548755 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d4b2abf_6340_4188_bafd_45a37bf1b49f.slice/crio-1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6 WatchSource:0}: Error finding container 1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6: Status 404 returned error can't find the container with id 1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6 Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.809124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xg2mb" event={"ID":"eef3e476-7f8b-4182-95c6-dd9877b2416a","Type":"ContainerStarted","Data":"91f9622b223adaca3b33950c0f7e0f897a2452d73cb4562eab021629c98ec8c4"} Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.809470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xg2mb" event={"ID":"eef3e476-7f8b-4182-95c6-dd9877b2416a","Type":"ContainerStarted","Data":"a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca"} Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.811028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-034d-account-create-97dqn" event={"ID":"0d4b2abf-6340-4188-bafd-45a37bf1b49f","Type":"ContainerStarted","Data":"092e0687587aa39bb5e015bb8950c121b83412c6e819836040b6de00604dd898"} Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.811083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-034d-account-create-97dqn" event={"ID":"0d4b2abf-6340-4188-bafd-45a37bf1b49f","Type":"ContainerStarted","Data":"1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6"} Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.826574 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-xg2mb" podStartSLOduration=1.826555194 podStartE2EDuration="1.826555194s" podCreationTimestamp="2025-11-22 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:25:55.823742674 +0000 UTC m=+8117.665165690" watchObservedRunningTime="2025-11-22 09:25:55.826555194 +0000 UTC m=+8117.667978200" Nov 22 09:25:55 crc kubenswrapper[4858]: I1122 09:25:55.838845 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-034d-account-create-97dqn" podStartSLOduration=1.838827447 podStartE2EDuration="1.838827447s" 
podCreationTimestamp="2025-11-22 09:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:25:55.836557885 +0000 UTC m=+8117.677980891" watchObservedRunningTime="2025-11-22 09:25:55.838827447 +0000 UTC m=+8117.680250453" Nov 22 09:25:56 crc kubenswrapper[4858]: I1122 09:25:56.824498 4858 generic.go:334] "Generic (PLEG): container finished" podID="eef3e476-7f8b-4182-95c6-dd9877b2416a" containerID="91f9622b223adaca3b33950c0f7e0f897a2452d73cb4562eab021629c98ec8c4" exitCode=0 Nov 22 09:25:56 crc kubenswrapper[4858]: I1122 09:25:56.824578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xg2mb" event={"ID":"eef3e476-7f8b-4182-95c6-dd9877b2416a","Type":"ContainerDied","Data":"91f9622b223adaca3b33950c0f7e0f897a2452d73cb4562eab021629c98ec8c4"} Nov 22 09:25:56 crc kubenswrapper[4858]: I1122 09:25:56.826881 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d4b2abf-6340-4188-bafd-45a37bf1b49f" containerID="092e0687587aa39bb5e015bb8950c121b83412c6e819836040b6de00604dd898" exitCode=0 Nov 22 09:25:56 crc kubenswrapper[4858]: I1122 09:25:56.826947 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-034d-account-create-97dqn" event={"ID":"0d4b2abf-6340-4188-bafd-45a37bf1b49f","Type":"ContainerDied","Data":"092e0687587aa39bb5e015bb8950c121b83412c6e819836040b6de00604dd898"} Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.200446 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.207128 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.400713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts\") pod \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.400867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tqhs\" (UniqueName: \"kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs\") pod \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\" (UID: \"0d4b2abf-6340-4188-bafd-45a37bf1b49f\") " Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.400942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts\") pod \"eef3e476-7f8b-4182-95c6-dd9877b2416a\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.401049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ppf\" (UniqueName: \"kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf\") pod \"eef3e476-7f8b-4182-95c6-dd9877b2416a\" (UID: \"eef3e476-7f8b-4182-95c6-dd9877b2416a\") " Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.401257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"0d4b2abf-6340-4188-bafd-45a37bf1b49f" (UID: "0d4b2abf-6340-4188-bafd-45a37bf1b49f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.401851 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d4b2abf-6340-4188-bafd-45a37bf1b49f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.402283 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eef3e476-7f8b-4182-95c6-dd9877b2416a" (UID: "eef3e476-7f8b-4182-95c6-dd9877b2416a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.406902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf" (OuterVolumeSpecName: "kube-api-access-k9ppf") pod "eef3e476-7f8b-4182-95c6-dd9877b2416a" (UID: "eef3e476-7f8b-4182-95c6-dd9877b2416a"). InnerVolumeSpecName "kube-api-access-k9ppf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.407029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs" (OuterVolumeSpecName: "kube-api-access-2tqhs") pod "0d4b2abf-6340-4188-bafd-45a37bf1b49f" (UID: "0d4b2abf-6340-4188-bafd-45a37bf1b49f"). InnerVolumeSpecName "kube-api-access-2tqhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.503240 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tqhs\" (UniqueName: \"kubernetes.io/projected/0d4b2abf-6340-4188-bafd-45a37bf1b49f-kube-api-access-2tqhs\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.503276 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef3e476-7f8b-4182-95c6-dd9877b2416a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.503286 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9ppf\" (UniqueName: \"kubernetes.io/projected/eef3e476-7f8b-4182-95c6-dd9877b2416a-kube-api-access-k9ppf\") on node \"crc\" DevicePath \"\"" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.844110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xg2mb" event={"ID":"eef3e476-7f8b-4182-95c6-dd9877b2416a","Type":"ContainerDied","Data":"a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca"} Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.844156 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a85ab37370565ee73998e5e3895bdb1365d24d8d0ee30cecf074a9ad3085b5ca" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.844136 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xg2mb" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.847647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-034d-account-create-97dqn" event={"ID":"0d4b2abf-6340-4188-bafd-45a37bf1b49f","Type":"ContainerDied","Data":"1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6"} Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.847684 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dfb87d3350a3b06c8caca0db63aebac4b44e764cdc7819cc2e7df07f3d842c6" Nov 22 09:25:58 crc kubenswrapper[4858]: I1122 09:25:58.847715 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-034d-account-create-97dqn" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.949613 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mpk4l"] Nov 22 09:25:59 crc kubenswrapper[4858]: E1122 09:25:59.950298 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef3e476-7f8b-4182-95c6-dd9877b2416a" containerName="mariadb-database-create" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.950313 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef3e476-7f8b-4182-95c6-dd9877b2416a" containerName="mariadb-database-create" Nov 22 09:25:59 crc kubenswrapper[4858]: E1122 09:25:59.950364 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4b2abf-6340-4188-bafd-45a37bf1b49f" containerName="mariadb-account-create" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.950373 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4b2abf-6340-4188-bafd-45a37bf1b49f" containerName="mariadb-account-create" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.950581 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef3e476-7f8b-4182-95c6-dd9877b2416a" containerName="mariadb-database-create" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.950607 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4b2abf-6340-4188-bafd-45a37bf1b49f" containerName="mariadb-account-create" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.951278 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mpk4l" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.955290 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.955495 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qflq4" Nov 22 09:25:59 crc kubenswrapper[4858]: I1122 09:25:59.969763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mpk4l"] Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.132693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.132794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9xs\" (UniqueName: \"kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.133471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.133525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.235883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.235981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9xs\" (UniqueName: \"kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.236011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.236379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data\") pod 
\"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.241539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.242884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.242912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.253253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9xs\" (UniqueName: \"kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs\") pod \"glance-db-sync-mpk4l\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.275144 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.852275 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mpk4l"] Nov 22 09:26:00 crc kubenswrapper[4858]: W1122 09:26:00.855456 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8fe1a0_6f1a_4ac5_b3b4_871336c73852.slice/crio-edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5 WatchSource:0}: Error finding container edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5: Status 404 returned error can't find the container with id edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5 Nov 22 09:26:00 crc kubenswrapper[4858]: I1122 09:26:00.866895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mpk4l" event={"ID":"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852","Type":"ContainerStarted","Data":"edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5"} Nov 22 09:26:03 crc kubenswrapper[4858]: I1122 09:26:03.535816 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:26:03 crc kubenswrapper[4858]: E1122 09:26:03.536454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:26:16 crc kubenswrapper[4858]: I1122 09:26:16.535709 4858 scope.go:117] "RemoveContainer" 
containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:26:16 crc kubenswrapper[4858]: E1122 09:26:16.538503 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:26:21 crc kubenswrapper[4858]: E1122 09:26:21.045448 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:87d86758a49b8425a546c66207f21761" Nov 22 09:26:21 crc kubenswrapper[4858]: E1122 09:26:21.045981 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:87d86758a49b8425a546c66207f21761" Nov 22 09:26:21 crc kubenswrapper[4858]: E1122 09:26:21.046157 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw9xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mpk4l_openstack(ac8fe1a0-6f1a-4ac5-b3b4-871336c73852): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 09:26:21 crc 
kubenswrapper[4858]: E1122 09:26:21.048496 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mpk4l" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" Nov 22 09:26:22 crc kubenswrapper[4858]: E1122 09:26:22.045216 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:87d86758a49b8425a546c66207f21761\\\"\"" pod="openstack/glance-db-sync-mpk4l" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" Nov 22 09:26:29 crc kubenswrapper[4858]: I1122 09:26:29.545131 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:26:29 crc kubenswrapper[4858]: E1122 09:26:29.546092 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:26:36 crc kubenswrapper[4858]: I1122 09:26:36.764185 4858 scope.go:117] "RemoveContainer" containerID="d25cdb2a71495a0a4e14f64b067bed818d184d245e7ba480e82f9b14c1dd8c9d" Nov 22 09:26:39 crc kubenswrapper[4858]: I1122 09:26:39.193237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mpk4l" event={"ID":"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852","Type":"ContainerStarted","Data":"6964aeee339eb70d00d139334408f099093bdc3299e0ca665e1767eaf0192b7e"} Nov 22 09:26:39 crc kubenswrapper[4858]: I1122 09:26:39.211955 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mpk4l" podStartSLOduration=3.344262688 podStartE2EDuration="40.21193041s" podCreationTimestamp="2025-11-22 09:25:59 +0000 UTC" firstStartedPulling="2025-11-22 09:26:00.857295291 +0000 UTC m=+8122.698718297" lastFinishedPulling="2025-11-22 09:26:37.724963023 +0000 UTC m=+8159.566386019" observedRunningTime="2025-11-22 09:26:39.207735366 +0000 UTC m=+8161.049158382" watchObservedRunningTime="2025-11-22 09:26:39.21193041 +0000 UTC m=+8161.053353416" Nov 22 09:26:40 crc kubenswrapper[4858]: I1122 09:26:40.535936 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:26:40 crc kubenswrapper[4858]: E1122 09:26:40.536637 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:26:42 crc kubenswrapper[4858]: I1122 09:26:42.219659 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" containerID="6964aeee339eb70d00d139334408f099093bdc3299e0ca665e1767eaf0192b7e" exitCode=0 Nov 22 09:26:42 crc kubenswrapper[4858]: I1122 09:26:42.219730 4858 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-db-sync-mpk4l" event={"ID":"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852","Type":"ContainerDied","Data":"6964aeee339eb70d00d139334408f099093bdc3299e0ca665e1767eaf0192b7e"} Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.597445 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.673300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data\") pod \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.673506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9xs\" (UniqueName: \"kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs\") pod \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.673626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle\") pod \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.673680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data\") pod \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\" (UID: \"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852\") " Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.677985 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs" (OuterVolumeSpecName: "kube-api-access-fw9xs") pod "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" (UID: "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852"). InnerVolumeSpecName "kube-api-access-fw9xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.689842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" (UID: "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.696693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" (UID: "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.721275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data" (OuterVolumeSpecName: "config-data") pod "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" (UID: "ac8fe1a0-6f1a-4ac5-b3b4-871336c73852"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.777599 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.777634 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9xs\" (UniqueName: \"kubernetes.io/projected/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-kube-api-access-fw9xs\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.777646 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:43 crc kubenswrapper[4858]: I1122 09:26:43.777658 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.241521 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mpk4l" event={"ID":"ac8fe1a0-6f1a-4ac5-b3b4-871336c73852","Type":"ContainerDied","Data":"edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5"} Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.241577 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc24b9e20a1af4f54c63dc35be2b804981e45a69a3653a86eaef57848225fc5" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.241607 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mpk4l" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.558711 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:44 crc kubenswrapper[4858]: E1122 09:26:44.560217 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" containerName="glance-db-sync" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.560769 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" containerName="glance-db-sync" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.561235 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" containerName="glance-db-sync" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.562686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.576471 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.583954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.584188 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qflq4" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.584363 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.668391 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.670303 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.694686 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.700756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.700812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.700838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xkqx\" (UniqueName: \"kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.700891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.700914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.701033 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.757911 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.759896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.764518 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.765985 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803177 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xkqx\" (UniqueName: \"kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjlc\" (UniqueName: \"kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803560 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " 
pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.803929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.804026 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.804209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.804981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.808228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.814826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.817608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.824633 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xkqx\" (UniqueName: \"kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx\") pod \"glance-default-external-api-0\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.905851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.905924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.905959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.905987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpjlc\" (UniqueName: \"kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906120 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mf2n\" (UniqueName: \"kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: 
I1122 09:26:44.906204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.906270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.907286 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.908282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.909068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.909213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.924667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpjlc\" (UniqueName: \"kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc\") pod \"dnsmasq-dns-59898c8fd7-5b692\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:44 crc kubenswrapper[4858]: I1122 09:26:44.941914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.000099 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.007495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mf2n\" (UniqueName: \"kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.007819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.007953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.008122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.008292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.008461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.009518 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.010478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.016271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.017213 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.017591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.026008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mf2n\" (UniqueName: \"kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n\") pod \"glance-default-internal-api-0\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.107896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.516752 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.548673 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:26:45 crc kubenswrapper[4858]: W1122 09:26:45.553045 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc9a18c_a39b_42a2_a58f_59e1b9f61185.slice/crio-469368159c105cb63b6bbfa23a5a11a8376f9cdbf362b23c46278ef12b68f421 WatchSource:0}: Error finding container 469368159c105cb63b6bbfa23a5a11a8376f9cdbf362b23c46278ef12b68f421: Status 404 returned error can't find the container with id 469368159c105cb63b6bbfa23a5a11a8376f9cdbf362b23c46278ef12b68f421 Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.685684 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:45 crc kubenswrapper[4858]: W1122 09:26:45.696610 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode154d9b3_7bbb_41da_b987_bd74151874ad.slice/crio-a21dd5a1582e32ae440b654c4f861542fba8c2053430369ccd773cc6749d84d7 WatchSource:0}: Error finding container a21dd5a1582e32ae440b654c4f861542fba8c2053430369ccd773cc6749d84d7: Status 404 returned error can't find the container with id a21dd5a1582e32ae440b654c4f861542fba8c2053430369ccd773cc6749d84d7 Nov 22 09:26:45 crc kubenswrapper[4858]: I1122 09:26:45.892734 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.282460 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerID="ff5e72b67c0a70befd2a984e8ae3507b48149f727ced8e6646f9752f717338b6" exitCode=0 Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.282597 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" 
event={"ID":"0fc9a18c-a39b-42a2-a58f-59e1b9f61185","Type":"ContainerDied","Data":"ff5e72b67c0a70befd2a984e8ae3507b48149f727ced8e6646f9752f717338b6"} Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.282909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" event={"ID":"0fc9a18c-a39b-42a2-a58f-59e1b9f61185","Type":"ContainerStarted","Data":"469368159c105cb63b6bbfa23a5a11a8376f9cdbf362b23c46278ef12b68f421"} Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.290789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerStarted","Data":"a21dd5a1582e32ae440b654c4f861542fba8c2053430369ccd773cc6749d84d7"} Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.293173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerStarted","Data":"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b"} Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.293198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerStarted","Data":"ee6305dcd701a4a582dd293006c02540f37b2a6236f9a8c087791f4d268f706b"} Nov 22 09:26:46 crc kubenswrapper[4858]: I1122 09:26:46.781036 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.303651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerStarted","Data":"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca"} Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.303777 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-log" containerID="cri-o://ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" gracePeriod=30 Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.303810 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-httpd" containerID="cri-o://543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" gracePeriod=30 Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.308673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerStarted","Data":"60145154d6edaa673b6379978f5bbea65249a8e45ceab3f2844dd661d57522f3"} Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.308721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerStarted","Data":"bf1d2d2cc25266e3994edd690faeb9de21e73462537d997967a5fc8695657303"} Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.308842 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-log" 
containerID="cri-o://bf1d2d2cc25266e3994edd690faeb9de21e73462537d997967a5fc8695657303" gracePeriod=30 Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.308928 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-httpd" containerID="cri-o://60145154d6edaa673b6379978f5bbea65249a8e45ceab3f2844dd661d57522f3" gracePeriod=30 Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.314740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" event={"ID":"0fc9a18c-a39b-42a2-a58f-59e1b9f61185","Type":"ContainerStarted","Data":"6957dc9c8aef2e348c129ca5e42c6e1460e6200e851222fbb125731d1f04550e"} Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.315558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.345920 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.345871007 podStartE2EDuration="3.345871007s" podCreationTimestamp="2025-11-22 09:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:26:47.340440714 +0000 UTC m=+8169.181863730" watchObservedRunningTime="2025-11-22 09:26:47.345871007 +0000 UTC m=+8169.187294013" Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.379905 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" podStartSLOduration=3.379882986 podStartE2EDuration="3.379882986s" podCreationTimestamp="2025-11-22 09:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:26:47.37468958 +0000 UTC m=+8169.216112586" watchObservedRunningTime="2025-11-22 09:26:47.379882986 +0000 UTC m=+8169.221305992" Nov 22 09:26:47 crc kubenswrapper[4858]: I1122 09:26:47.399151 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.399134142 podStartE2EDuration="3.399134142s" podCreationTimestamp="2025-11-22 09:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:26:47.394902536 +0000 UTC m=+8169.236325542" watchObservedRunningTime="2025-11-22 09:26:47.399134142 +0000 UTC m=+8169.240557148" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.324018 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.324152 4858 generic.go:334] "Generic (PLEG): container finished" podID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerID="60145154d6edaa673b6379978f5bbea65249a8e45ceab3f2844dd661d57522f3" exitCode=143 Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.324746 4858 generic.go:334] "Generic (PLEG): container finished" podID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerID="bf1d2d2cc25266e3994edd690faeb9de21e73462537d997967a5fc8695657303" exitCode=143 Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.324181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerDied","Data":"60145154d6edaa673b6379978f5bbea65249a8e45ceab3f2844dd661d57522f3"} Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.324823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerDied","Data":"bf1d2d2cc25266e3994edd690faeb9de21e73462537d997967a5fc8695657303"} Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.327858 4858 generic.go:334] "Generic (PLEG): container finished" podID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerID="543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" exitCode=0 Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.327892 4858 generic.go:334] "Generic (PLEG): container finished" podID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerID="ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" exitCode=143 Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.327972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerDied","Data":"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca"} Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.328014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerDied","Data":"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b"} Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.328026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7","Type":"ContainerDied","Data":"ee6305dcd701a4a582dd293006c02540f37b2a6236f9a8c087791f4d268f706b"} Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.328041 4858 scope.go:117] "RemoveContainer" containerID="543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.327981 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xkqx\" (UniqueName: \"kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.376822 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data\") pod \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\" (UID: \"39f05d36-cd62-426a-bc5f-f9dc7b9a10f7\") " Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.377409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs" (OuterVolumeSpecName: "logs") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.378002 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.378043 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.381436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx" (OuterVolumeSpecName: "kube-api-access-5xkqx") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "kube-api-access-5xkqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.381575 4858 scope.go:117] "RemoveContainer" containerID="ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.391575 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts" (OuterVolumeSpecName: "scripts") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.407707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.433065 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data" (OuterVolumeSpecName: "config-data") pod "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" (UID: "39f05d36-cd62-426a-bc5f-f9dc7b9a10f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.481972 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xkqx\" (UniqueName: \"kubernetes.io/projected/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-kube-api-access-5xkqx\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.482002 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.482011 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.482020 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.484238 4858 scope.go:117] "RemoveContainer" containerID="543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" Nov 22 09:26:48 crc kubenswrapper[4858]: E1122 09:26:48.484885 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca\": container with ID starting with 543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca not found: ID does not exist" containerID="543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.484918 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca"} err="failed to get container status \"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca\": rpc error: code = NotFound desc = could not find container \"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca\": container with ID starting with 543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca not found: ID does not exist" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.484940 4858 scope.go:117] "RemoveContainer" containerID="ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" Nov 22 09:26:48 crc kubenswrapper[4858]: E1122 09:26:48.485348 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b\": container with ID starting with ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b not found: ID does not exist" containerID="ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.485366 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b"} err="failed to get container status \"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b\": rpc error: code = NotFound desc = could not find container \"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b\": container with ID starting with 
ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b not found: ID does not exist" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.485378 4858 scope.go:117] "RemoveContainer" containerID="543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.485702 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca"} err="failed to get container status \"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca\": rpc error: code = NotFound desc = could not find container \"543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca\": container with ID starting with 543457106fe02b478908d7090d068324f8c04e6f1aaad2ac93e60e52e7f6faca not found: ID does not exist" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.485720 4858 scope.go:117] "RemoveContainer" containerID="ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.485907 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b"} err="failed to get container status \"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b\": rpc error: code = NotFound desc = could not find container \"ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b\": container with ID starting with ff1953f5509a4eb49f98b400ca53914cdab8bd0547a1ea04aa4384776e5e004b not found: ID does not exist" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.663957 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.675687 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.692913 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:48 crc kubenswrapper[4858]: E1122 09:26:48.693307 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-log" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.693346 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-log" Nov 22 09:26:48 crc kubenswrapper[4858]: E1122 09:26:48.693376 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-httpd" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.693382 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-httpd" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.693605 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-log" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.693638 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" containerName="glance-httpd" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.694638 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.697369 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.697544 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.702754 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.786445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.786604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktwq\" (UniqueName: \"kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.786651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.786925 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.787187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.787282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.787421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889186 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fktwq\" (UniqueName: \"kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889510 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889540 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.889995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.890076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.896020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.896138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.896351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.896650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:48 crc kubenswrapper[4858]: I1122 09:26:48.906537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fktwq\" (UniqueName: \"kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq\") pod \"glance-default-external-api-0\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " pod="openstack/glance-default-external-api-0" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.035298 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.457557 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.552880 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f05d36-cd62-426a-bc5f-f9dc7b9a10f7" path="/var/lib/kubelet/pods/39f05d36-cd62-426a-bc5f-f9dc7b9a10f7/volumes" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.598984 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:26:49 crc kubenswrapper[4858]: W1122 09:26:49.606536 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e5858c_c5a3_4ada_910a_451cae38681d.slice/crio-b1ac3a35ddbed4b08026764e2cdebc03589d43682d34e69bc24b29fc715b5fbd WatchSource:0}: Error finding container b1ac3a35ddbed4b08026764e2cdebc03589d43682d34e69bc24b29fc715b5fbd: Status 404 returned error can't find the container with id b1ac3a35ddbed4b08026764e2cdebc03589d43682d34e69bc24b29fc715b5fbd Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607271 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mf2n\" (UniqueName: \"kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts\") pod \"e154d9b3-7bbb-41da-b987-bd74151874ad\" (UID: \"e154d9b3-7bbb-41da-b987-bd74151874ad\") " Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.607688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs" (OuterVolumeSpecName: "logs") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.608419 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.610337 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.614050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts" (OuterVolumeSpecName: "scripts") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.615278 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n" (OuterVolumeSpecName: "kube-api-access-8mf2n") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "kube-api-access-8mf2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.637571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.664391 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data" (OuterVolumeSpecName: "config-data") pod "e154d9b3-7bbb-41da-b987-bd74151874ad" (UID: "e154d9b3-7bbb-41da-b987-bd74151874ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.709757 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.709796 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.709810 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e154d9b3-7bbb-41da-b987-bd74151874ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.709819 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mf2n\" (UniqueName: \"kubernetes.io/projected/e154d9b3-7bbb-41da-b987-bd74151874ad-kube-api-access-8mf2n\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:49 crc kubenswrapper[4858]: I1122 09:26:49.709827 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e154d9b3-7bbb-41da-b987-bd74151874ad-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.352641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerStarted","Data":"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7"} Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.353980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerStarted","Data":"b1ac3a35ddbed4b08026764e2cdebc03589d43682d34e69bc24b29fc715b5fbd"} Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.358909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e154d9b3-7bbb-41da-b987-bd74151874ad","Type":"ContainerDied","Data":"a21dd5a1582e32ae440b654c4f861542fba8c2053430369ccd773cc6749d84d7"} Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.358950 4858 scope.go:117] "RemoveContainer" containerID="60145154d6edaa673b6379978f5bbea65249a8e45ceab3f2844dd661d57522f3" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.358979 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.393540 4858 scope.go:117] "RemoveContainer" containerID="bf1d2d2cc25266e3994edd690faeb9de21e73462537d997967a5fc8695657303" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.400968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.417845 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.431121 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:50 crc kubenswrapper[4858]: E1122 09:26:50.431573 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-httpd" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.431590 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-httpd" Nov 22 09:26:50 crc kubenswrapper[4858]: E1122 09:26:50.431611 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-log" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.431617 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-log" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.431783 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-log" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.431806 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" containerName="glance-httpd" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.432841 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.436078 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.437376 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.441351 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.524885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.524940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.524964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.524991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.525267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.525398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvts\" (UniqueName: \"kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.525551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627594 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvts\" (UniqueName: \"kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.627890 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.629889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.630744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.632797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.635622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.640224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.643059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.656795 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvts\" (UniqueName: \"kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts\") pod \"glance-default-internal-api-0\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:26:50 crc kubenswrapper[4858]: I1122 09:26:50.763846 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:26:51 crc kubenswrapper[4858]: I1122 09:26:51.308285 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:26:51 crc kubenswrapper[4858]: I1122 09:26:51.370971 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerStarted","Data":"26bb6b3d70431d043f9b2f2b2ee59d1bf59c9a5fb9fb787766bbbc2d08ec5fd4"} Nov 22 09:26:51 crc kubenswrapper[4858]: I1122 09:26:51.374170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerStarted","Data":"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a"} Nov 22 09:26:51 crc kubenswrapper[4858]: I1122 09:26:51.396420 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.396403555 podStartE2EDuration="3.396403555s" podCreationTimestamp="2025-11-22 09:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:26:51.390813746 +0000 UTC m=+8173.232236782" watchObservedRunningTime="2025-11-22 09:26:51.396403555 +0000 UTC m=+8173.237826561" Nov 22 09:26:51 crc kubenswrapper[4858]: I1122 09:26:51.551804 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e154d9b3-7bbb-41da-b987-bd74151874ad" path="/var/lib/kubelet/pods/e154d9b3-7bbb-41da-b987-bd74151874ad/volumes" Nov 22 09:26:52 crc kubenswrapper[4858]: I1122 09:26:52.382691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerStarted","Data":"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92"} Nov 22 09:26:52 crc kubenswrapper[4858]: I1122 09:26:52.382974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerStarted","Data":"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db"} Nov 22 09:26:52 crc kubenswrapper[4858]: I1122 09:26:52.404373 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.404353652 podStartE2EDuration="2.404353652s" podCreationTimestamp="2025-11-22 09:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:26:52.402454411 +0000 UTC m=+8174.243877437" watchObservedRunningTime="2025-11-22 09:26:52.404353652 +0000 UTC m=+8174.245776688" Nov 22 09:26:52 crc kubenswrapper[4858]: I1122 09:26:52.536211 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:26:52 crc kubenswrapper[4858]: E1122 09:26:52.536477 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" 
podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.002567 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.072174 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.072495 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="dnsmasq-dns" containerID="cri-o://8d43deedb4c57520b219394fc169e7a6a3cddc5fec6e431eec98196831db3c77" gracePeriod=10 Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.420044 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerID="8d43deedb4c57520b219394fc169e7a6a3cddc5fec6e431eec98196831db3c77" exitCode=0 Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.420109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" event={"ID":"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c","Type":"ContainerDied","Data":"8d43deedb4c57520b219394fc169e7a6a3cddc5fec6e431eec98196831db3c77"} Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.597328 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.734091 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb\") pod \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.734175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ttsx\" (UniqueName: \"kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx\") pod \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.734232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc\") pod \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.734351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config\") pod \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.734403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb\") pod \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\" (UID: \"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c\") " Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.751881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx" (OuterVolumeSpecName: "kube-api-access-4ttsx") pod 
"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" (UID: "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c"). InnerVolumeSpecName "kube-api-access-4ttsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.782636 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" (UID: "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.787875 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config" (OuterVolumeSpecName: "config") pod "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" (UID: "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.789217 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" (UID: "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.789947 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" (UID: "6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.838301 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ttsx\" (UniqueName: \"kubernetes.io/projected/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-kube-api-access-4ttsx\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.838556 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.838571 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.838586 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:55 crc kubenswrapper[4858]: I1122 09:26:55.838596 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.435120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" event={"ID":"6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c","Type":"ContainerDied","Data":"8825f1a7132c7a952487fbb8d881f7402fc8e7a3b8c9236287a19b86b27797bb"} Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.435223 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f8d9dc987-64mzw" Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.436566 4858 scope.go:117] "RemoveContainer" containerID="8d43deedb4c57520b219394fc169e7a6a3cddc5fec6e431eec98196831db3c77" Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.479927 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.482782 4858 scope.go:117] "RemoveContainer" containerID="924de53d8cd56034d35c472f63e003fb770a7a9ffe6b83c1b597c5c075bd3ca5" Nov 22 09:26:56 crc kubenswrapper[4858]: I1122 09:26:56.487165 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f8d9dc987-64mzw"] Nov 22 09:26:57 crc kubenswrapper[4858]: I1122 09:26:57.546807 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" path="/var/lib/kubelet/pods/6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c/volumes" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.035787 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.036663 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.071051 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.098211 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.467969 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 09:26:59 crc kubenswrapper[4858]: I1122 09:26:59.468018 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 09:27:00 crc kubenswrapper[4858]: I1122 09:27:00.765449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:00 crc kubenswrapper[4858]: I1122 09:27:00.766737 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:00 crc kubenswrapper[4858]: I1122 09:27:00.814657 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:00 crc kubenswrapper[4858]: I1122 09:27:00.814743 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:01 crc kubenswrapper[4858]: I1122 09:27:01.434466 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 09:27:01 crc kubenswrapper[4858]: I1122 09:27:01.437032 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 09:27:01 crc kubenswrapper[4858]: I1122 09:27:01.493411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:01 crc kubenswrapper[4858]: I1122 09:27:01.493849 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Nov 22 09:27:03 crc kubenswrapper[4858]: I1122 09:27:03.470154 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:03 crc kubenswrapper[4858]: I1122 09:27:03.485528 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 09:27:03 crc kubenswrapper[4858]: I1122 09:27:03.541131 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:27:03 crc kubenswrapper[4858]: E1122 09:27:03.542632 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.377691 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sgm8k"] Nov 22 09:27:09 crc kubenswrapper[4858]: E1122 09:27:09.378507 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="init" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.378519 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="init" Nov 22 09:27:09 crc kubenswrapper[4858]: E1122 09:27:09.378534 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="dnsmasq-dns" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.378541 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="dnsmasq-dns" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.378706 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8c44e3-2f3e-4921-b4d3-0c2fab0cc59c" containerName="dnsmasq-dns" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.379308 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.385129 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sgm8k"] Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.488428 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8911-account-create-scn9n"] Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.489524 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.492073 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.513219 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8911-account-create-scn9n"] Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.520436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcdb\" (UniqueName: \"kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb\") pod \"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.520594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts\") pod \"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.622115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcsh\" (UniqueName: \"kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.622173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts\") pod \"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.622252 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.622295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcdb\" (UniqueName: \"kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb\") pod \"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.623123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts\") pod \"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.642406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcdb\" (UniqueName: \"kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb\") pod 
\"placement-db-create-sgm8k\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.705394 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.725516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.725647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcsh\" (UniqueName: \"kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.726726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.748367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcsh\" (UniqueName: \"kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh\") pod \"placement-8911-account-create-scn9n\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:09 crc kubenswrapper[4858]: I1122 09:27:09.810169 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.235547 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sgm8k"] Nov 22 09:27:10 crc kubenswrapper[4858]: W1122 09:27:10.239636 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a86cb6_d9b2_4bcc_8c0f_3081c4c1c19e.slice/crio-8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a WatchSource:0}: Error finding container 8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a: Status 404 returned error can't find the container with id 8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.316717 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8911-account-create-scn9n"] Nov 22 09:27:10 crc kubenswrapper[4858]: W1122 09:27:10.324223 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612fc455_b33b_48db_9146_0e99a8f7dd73.slice/crio-af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246 WatchSource:0}: Error finding container af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246: Status 404 returned error can't find the container with id af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246 Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.571541 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8911-account-create-scn9n" event={"ID":"612fc455-b33b-48db-9146-0e99a8f7dd73","Type":"ContainerStarted","Data":"501193bba928fcf19597d861908ba432fbbe9c34e5311d048351828fa311d0cd"} Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.571588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8911-account-create-scn9n" event={"ID":"612fc455-b33b-48db-9146-0e99a8f7dd73","Type":"ContainerStarted","Data":"af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246"} Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.573542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sgm8k" event={"ID":"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e","Type":"ContainerStarted","Data":"e2c5a022611c44ce3cbed51b69c30c11acf8f759b76b14b619abf808be7ffc30"} Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.573589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sgm8k" event={"ID":"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e","Type":"ContainerStarted","Data":"8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a"} Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.594958 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8911-account-create-scn9n" podStartSLOduration=1.594936958 podStartE2EDuration="1.594936958s" podCreationTimestamp="2025-11-22 09:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:27:10.585083173 +0000 UTC m=+8192.426506199" watchObservedRunningTime="2025-11-22 09:27:10.594936958 +0000 UTC m=+8192.436359964" Nov 22 09:27:10 crc kubenswrapper[4858]: I1122 09:27:10.604503 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-sgm8k" podStartSLOduration=1.604479604 
podStartE2EDuration="1.604479604s" podCreationTimestamp="2025-11-22 09:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:27:10.603453461 +0000 UTC m=+8192.444876467" watchObservedRunningTime="2025-11-22 09:27:10.604479604 +0000 UTC m=+8192.445902620" Nov 22 09:27:11 crc kubenswrapper[4858]: I1122 09:27:11.582305 4858 generic.go:334] "Generic (PLEG): container finished" podID="08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" containerID="e2c5a022611c44ce3cbed51b69c30c11acf8f759b76b14b619abf808be7ffc30" exitCode=0 Nov 22 09:27:11 crc kubenswrapper[4858]: I1122 09:27:11.582356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sgm8k" event={"ID":"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e","Type":"ContainerDied","Data":"e2c5a022611c44ce3cbed51b69c30c11acf8f759b76b14b619abf808be7ffc30"} Nov 22 09:27:11 crc kubenswrapper[4858]: I1122 09:27:11.584244 4858 generic.go:334] "Generic (PLEG): container finished" podID="612fc455-b33b-48db-9146-0e99a8f7dd73" containerID="501193bba928fcf19597d861908ba432fbbe9c34e5311d048351828fa311d0cd" exitCode=0 Nov 22 09:27:11 crc kubenswrapper[4858]: I1122 09:27:11.584279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8911-account-create-scn9n" event={"ID":"612fc455-b33b-48db-9146-0e99a8f7dd73","Type":"ContainerDied","Data":"501193bba928fcf19597d861908ba432fbbe9c34e5311d048351828fa311d0cd"} Nov 22 09:27:12 crc kubenswrapper[4858]: I1122 09:27:12.984939 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:12 crc kubenswrapper[4858]: I1122 09:27:12.991255 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.088335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzcdb\" (UniqueName: \"kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb\") pod \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.088407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvcsh\" (UniqueName: \"kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh\") pod \"612fc455-b33b-48db-9146-0e99a8f7dd73\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.088493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts\") pod \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\" (UID: \"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e\") " Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.088578 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts\") pod \"612fc455-b33b-48db-9146-0e99a8f7dd73\" (UID: \"612fc455-b33b-48db-9146-0e99a8f7dd73\") " Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.089101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" (UID: "08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.089559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "612fc455-b33b-48db-9146-0e99a8f7dd73" (UID: "612fc455-b33b-48db-9146-0e99a8f7dd73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.096711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh" (OuterVolumeSpecName: "kube-api-access-rvcsh") pod "612fc455-b33b-48db-9146-0e99a8f7dd73" (UID: "612fc455-b33b-48db-9146-0e99a8f7dd73"). InnerVolumeSpecName "kube-api-access-rvcsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.109909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb" (OuterVolumeSpecName: "kube-api-access-bzcdb") pod "08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" (UID: "08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e"). InnerVolumeSpecName "kube-api-access-bzcdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.190877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzcdb\" (UniqueName: \"kubernetes.io/projected/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-kube-api-access-bzcdb\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.190924 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvcsh\" (UniqueName: \"kubernetes.io/projected/612fc455-b33b-48db-9146-0e99a8f7dd73-kube-api-access-rvcsh\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.190934 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.190942 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/612fc455-b33b-48db-9146-0e99a8f7dd73-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.608922 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sgm8k" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.608868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sgm8k" event={"ID":"08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e","Type":"ContainerDied","Data":"8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a"} Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.611662 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a62983c19bda1e9c91e822b93000a0a1b0021cbe0aae65683962175e9857a9a" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.612526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8911-account-create-scn9n" event={"ID":"612fc455-b33b-48db-9146-0e99a8f7dd73","Type":"ContainerDied","Data":"af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246"} Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.612550 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af53df5123b5f059963f359f54b4dafd82d84ca40d55fc1c0bc7920411412246" Nov 22 09:27:13 crc kubenswrapper[4858]: I1122 09:27:13.612607 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8911-account-create-scn9n" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.536352 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:27:14 crc kubenswrapper[4858]: E1122 09:27:14.536838 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.807984 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:27:14 crc kubenswrapper[4858]: E1122 09:27:14.808721 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612fc455-b33b-48db-9146-0e99a8f7dd73" containerName="mariadb-account-create" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.808743 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="612fc455-b33b-48db-9146-0e99a8f7dd73" containerName="mariadb-account-create" Nov 22 09:27:14 crc kubenswrapper[4858]: E1122 09:27:14.808785 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" containerName="mariadb-database-create" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.808792 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" containerName="mariadb-database-create" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.809058 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="612fc455-b33b-48db-9146-0e99a8f7dd73" containerName="mariadb-account-create" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.809072 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" containerName="mariadb-database-create" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.810794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.823815 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.874304 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dtwr9"] Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.876050 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.880728 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s4b97" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.880789 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.880974 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.886595 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dtwr9"] Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.920921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbzx\" (UniqueName: \"kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jlx\" (UniqueName: \"kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" 
Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:14 crc kubenswrapper[4858]: I1122 09:27:14.921655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9jlx\" (UniqueName: \"kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbzx\" (UniqueName: \"kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023671 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.025054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.023757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.025505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.026062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.027228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.030737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts\") pod \"placement-db-sync-dtwr9\" (UID: 
\"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.033195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.033350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.045203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbzx\" (UniqueName: \"kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx\") pod \"dnsmasq-dns-5c4fd8b9f9-jb85q\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.050991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9jlx\" (UniqueName: \"kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx\") pod \"placement-db-sync-dtwr9\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.132046 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.200157 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.698955 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:27:15 crc kubenswrapper[4858]: I1122 09:27:15.745242 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dtwr9"] Nov 22 09:27:16 crc kubenswrapper[4858]: I1122 09:27:16.691589 4858 generic.go:334] "Generic (PLEG): container finished" podID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerID="bcfb15f6d2bcb0fa633452f32421f7e46daee59e27b0fb89f145eb820d736272" exitCode=0 Nov 22 09:27:16 crc kubenswrapper[4858]: I1122 09:27:16.691712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" event={"ID":"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557","Type":"ContainerDied","Data":"bcfb15f6d2bcb0fa633452f32421f7e46daee59e27b0fb89f145eb820d736272"} Nov 22 09:27:16 crc kubenswrapper[4858]: I1122 09:27:16.692097 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" event={"ID":"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557","Type":"ContainerStarted","Data":"dea8aba12961477a565e2fea4cc31614d1302595eba7792d3d9dfc16f3d0a83f"} Nov 22 09:27:16 crc kubenswrapper[4858]: I1122 09:27:16.696403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtwr9" event={"ID":"16e60d88-c1dc-4437-8156-2fa02492e68d","Type":"ContainerStarted","Data":"544fbb33d58b66eb39d5dd71d84975f5fbff6a179836a94bed6620b05544d3c5"} Nov 22 09:27:17 crc kubenswrapper[4858]: I1122 09:27:17.709614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" event={"ID":"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557","Type":"ContainerStarted","Data":"6d1e7897a4158f0fb4cea63549ba7760a73b06807cc3a1fd286ce2faf340beb2"} Nov 22 09:27:17 crc kubenswrapper[4858]: I1122 09:27:17.710078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:17 crc kubenswrapper[4858]: I1122 09:27:17.732846 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" podStartSLOduration=3.73282834 podStartE2EDuration="3.73282834s" podCreationTimestamp="2025-11-22 09:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:27:17.728862353 +0000 UTC m=+8199.570285359" watchObservedRunningTime="2025-11-22 09:27:17.73282834 +0000 UTC m=+8199.574251346" Nov 22 09:27:19 crc kubenswrapper[4858]: I1122 09:27:19.728932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtwr9" event={"ID":"16e60d88-c1dc-4437-8156-2fa02492e68d","Type":"ContainerStarted","Data":"6c38eb81f75567f8fdaad6f8bbc06fb2f53e4c4382721b00a34f24280b871b64"} Nov 22 09:27:19 crc kubenswrapper[4858]: I1122 09:27:19.747972 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dtwr9" podStartSLOduration=2.149625403 podStartE2EDuration="5.747956509s" podCreationTimestamp="2025-11-22 09:27:14 +0000 UTC" firstStartedPulling="2025-11-22 09:27:15.744679384 +0000 UTC m=+8197.586102390" lastFinishedPulling="2025-11-22 09:27:19.34301049 +0000 UTC m=+8201.184433496" observedRunningTime="2025-11-22 09:27:19.745083017 +0000 UTC m=+8201.586506023" watchObservedRunningTime="2025-11-22 09:27:19.747956509 +0000 UTC 
m=+8201.589379515" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.807877 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.809629 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.860006 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.864846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.864956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.865190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjdz\" (UniqueName: \"kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.966698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.966770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.966859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjdz\" (UniqueName: \"kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.967335 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.967403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:20 crc kubenswrapper[4858]: I1122 09:27:20.987149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjdz\" (UniqueName: \"kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz\") pod \"community-operators-hhtsl\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:21 crc kubenswrapper[4858]: I1122 09:27:21.127781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:21 crc kubenswrapper[4858]: I1122 09:27:21.632307 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:21 crc kubenswrapper[4858]: I1122 09:27:21.747309 4858 generic.go:334] "Generic (PLEG): container finished" podID="16e60d88-c1dc-4437-8156-2fa02492e68d" containerID="6c38eb81f75567f8fdaad6f8bbc06fb2f53e4c4382721b00a34f24280b871b64" exitCode=0 Nov 22 09:27:21 crc kubenswrapper[4858]: I1122 09:27:21.747402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtwr9" event={"ID":"16e60d88-c1dc-4437-8156-2fa02492e68d","Type":"ContainerDied","Data":"6c38eb81f75567f8fdaad6f8bbc06fb2f53e4c4382721b00a34f24280b871b64"} Nov 22 09:27:21 crc kubenswrapper[4858]: I1122 09:27:21.749090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerStarted","Data":"d15cb524b85c4cd2c61e88d5232d1844fe7278f4df3a85894e1ebaa5fad7c867"} Nov 22 09:27:22 crc kubenswrapper[4858]: I1122 09:27:22.760153 4858 generic.go:334] "Generic (PLEG): container finished" podID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerID="54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19" exitCode=0 Nov 22 09:27:22 crc kubenswrapper[4858]: I1122 09:27:22.760237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerDied","Data":"54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19"} Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.123782 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.211386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs\") pod \"16e60d88-c1dc-4437-8156-2fa02492e68d\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.211511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts\") pod \"16e60d88-c1dc-4437-8156-2fa02492e68d\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.211543 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9jlx\" (UniqueName: \"kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx\") pod \"16e60d88-c1dc-4437-8156-2fa02492e68d\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.211559 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data\") pod \"16e60d88-c1dc-4437-8156-2fa02492e68d\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.211590 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle\") pod \"16e60d88-c1dc-4437-8156-2fa02492e68d\" (UID: \"16e60d88-c1dc-4437-8156-2fa02492e68d\") " Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.212001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs" (OuterVolumeSpecName: "logs") pod "16e60d88-c1dc-4437-8156-2fa02492e68d" (UID: "16e60d88-c1dc-4437-8156-2fa02492e68d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.216956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts" (OuterVolumeSpecName: "scripts") pod "16e60d88-c1dc-4437-8156-2fa02492e68d" (UID: "16e60d88-c1dc-4437-8156-2fa02492e68d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.217026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx" (OuterVolumeSpecName: "kube-api-access-j9jlx") pod "16e60d88-c1dc-4437-8156-2fa02492e68d" (UID: "16e60d88-c1dc-4437-8156-2fa02492e68d"). InnerVolumeSpecName "kube-api-access-j9jlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.237514 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16e60d88-c1dc-4437-8156-2fa02492e68d" (UID: "16e60d88-c1dc-4437-8156-2fa02492e68d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.241297 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data" (OuterVolumeSpecName: "config-data") pod "16e60d88-c1dc-4437-8156-2fa02492e68d" (UID: "16e60d88-c1dc-4437-8156-2fa02492e68d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.313289 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16e60d88-c1dc-4437-8156-2fa02492e68d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.313338 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.313355 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9jlx\" (UniqueName: \"kubernetes.io/projected/16e60d88-c1dc-4437-8156-2fa02492e68d-kube-api-access-j9jlx\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.313365 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.313375 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e60d88-c1dc-4437-8156-2fa02492e68d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.770780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dtwr9" event={"ID":"16e60d88-c1dc-4437-8156-2fa02492e68d","Type":"ContainerDied","Data":"544fbb33d58b66eb39d5dd71d84975f5fbff6a179836a94bed6620b05544d3c5"} Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.771080 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="544fbb33d58b66eb39d5dd71d84975f5fbff6a179836a94bed6620b05544d3c5" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.770824 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dtwr9" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.772696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerStarted","Data":"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7"} Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.863714 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:27:23 crc kubenswrapper[4858]: E1122 09:27:23.866499 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e60d88-c1dc-4437-8156-2fa02492e68d" containerName="placement-db-sync" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.866521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e60d88-c1dc-4437-8156-2fa02492e68d" containerName="placement-db-sync" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.866761 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e60d88-c1dc-4437-8156-2fa02492e68d" containerName="placement-db-sync" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.867972 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.870635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.870923 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.871160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s4b97" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.871377 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.871561 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.878814 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.926502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.926609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.926647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 
09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.926829 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.926911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fh2\" (UniqueName: \"kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.927040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:23 crc kubenswrapper[4858]: I1122 09:27:23.927072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028537 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fh2\" (UniqueName: \"kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc 
kubenswrapper[4858]: I1122 09:27:24.028619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.028650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.029112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.033512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.034026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.034698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.035534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.035598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.046130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fh2\" (UniqueName: \"kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2\") pod \"placement-74977f9d76-k6dlw\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.210466 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:24 crc kubenswrapper[4858]: W1122 09:27:24.728066 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e9999f0_5166_4fe0_9110_374b372ff6da.slice/crio-e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a WatchSource:0}: Error finding container e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a: Status 404 returned error can't find the container with id e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.732518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.826910 4858 generic.go:334] "Generic (PLEG): container finished" podID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerID="c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7" exitCode=0 Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.827005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerDied","Data":"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7"} Nov 22 09:27:24 crc kubenswrapper[4858]: I1122 09:27:24.828268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerStarted","Data":"e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a"} Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.133525 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.201139 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.201988 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="dnsmasq-dns" containerID="cri-o://6957dc9c8aef2e348c129ca5e42c6e1460e6200e851222fbb125731d1f04550e" gracePeriod=10 Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.846470 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerID="6957dc9c8aef2e348c129ca5e42c6e1460e6200e851222fbb125731d1f04550e" exitCode=0 Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.846696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" event={"ID":"0fc9a18c-a39b-42a2-a58f-59e1b9f61185","Type":"ContainerDied","Data":"6957dc9c8aef2e348c129ca5e42c6e1460e6200e851222fbb125731d1f04550e"} Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.849310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerStarted","Data":"065e51e4b82bfd09ef58eeccb1d741e51d5167ffe8e2bd644d87495b643cbfb5"} Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.849436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" 
event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerStarted","Data":"9329c10d2543dce5392c0af5a7d61ebfe67fba02c6cbc2e7b19da53775192377"} Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.849586 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.849607 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.852259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerStarted","Data":"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9"} Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.873830 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74977f9d76-k6dlw" podStartSLOduration=2.8738112129999998 podStartE2EDuration="2.873811213s" podCreationTimestamp="2025-11-22 09:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:27:25.868294656 +0000 UTC m=+8207.709717672" watchObservedRunningTime="2025-11-22 09:27:25.873811213 +0000 UTC m=+8207.715234229" Nov 22 09:27:25 crc kubenswrapper[4858]: I1122 09:27:25.956809 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hhtsl" podStartSLOduration=3.177254907 podStartE2EDuration="5.956785549s" podCreationTimestamp="2025-11-22 09:27:20 +0000 UTC" firstStartedPulling="2025-11-22 09:27:22.763124923 +0000 UTC m=+8204.604547929" lastFinishedPulling="2025-11-22 09:27:25.542655565 +0000 UTC m=+8207.384078571" observedRunningTime="2025-11-22 09:27:25.949538187 +0000 UTC m=+8207.790961193" watchObservedRunningTime="2025-11-22 09:27:25.956785549 +0000 UTC m=+8207.798208555" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.292115 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.391232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpjlc\" (UniqueName: \"kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc\") pod \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.391358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb\") pod \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.391758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config\") pod \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.391849 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb\") pod \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.391893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc\") pod \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\" (UID: \"0fc9a18c-a39b-42a2-a58f-59e1b9f61185\") " Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.403442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc" (OuterVolumeSpecName: "kube-api-access-dpjlc") pod "0fc9a18c-a39b-42a2-a58f-59e1b9f61185" (UID: "0fc9a18c-a39b-42a2-a58f-59e1b9f61185"). InnerVolumeSpecName "kube-api-access-dpjlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.449465 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fc9a18c-a39b-42a2-a58f-59e1b9f61185" (UID: "0fc9a18c-a39b-42a2-a58f-59e1b9f61185"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.463447 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fc9a18c-a39b-42a2-a58f-59e1b9f61185" (UID: "0fc9a18c-a39b-42a2-a58f-59e1b9f61185"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.478608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config" (OuterVolumeSpecName: "config") pod "0fc9a18c-a39b-42a2-a58f-59e1b9f61185" (UID: "0fc9a18c-a39b-42a2-a58f-59e1b9f61185"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.491010 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fc9a18c-a39b-42a2-a58f-59e1b9f61185" (UID: "0fc9a18c-a39b-42a2-a58f-59e1b9f61185"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.493725 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.493754 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.493768 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.493782 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpjlc\" (UniqueName: \"kubernetes.io/projected/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-kube-api-access-dpjlc\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.493792 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fc9a18c-a39b-42a2-a58f-59e1b9f61185-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.535974 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:27:26 crc kubenswrapper[4858]: E1122 09:27:26.536448 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.863721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" event={"ID":"0fc9a18c-a39b-42a2-a58f-59e1b9f61185","Type":"ContainerDied","Data":"469368159c105cb63b6bbfa23a5a11a8376f9cdbf362b23c46278ef12b68f421"} Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.863796 4858 scope.go:117] "RemoveContainer" containerID="6957dc9c8aef2e348c129ca5e42c6e1460e6200e851222fbb125731d1f04550e" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.863801 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59898c8fd7-5b692" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.889738 4858 scope.go:117] "RemoveContainer" containerID="ff5e72b67c0a70befd2a984e8ae3507b48149f727ced8e6646f9752f717338b6" Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.910549 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:27:26 crc kubenswrapper[4858]: I1122 09:27:26.918809 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59898c8fd7-5b692"] Nov 22 09:27:27 crc kubenswrapper[4858]: I1122 09:27:27.547284 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" path="/var/lib/kubelet/pods/0fc9a18c-a39b-42a2-a58f-59e1b9f61185/volumes" Nov 22 09:27:31 crc kubenswrapper[4858]: I1122 09:27:31.129091 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:31 crc kubenswrapper[4858]: I1122 09:27:31.129725 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:31 crc kubenswrapper[4858]: I1122 09:27:31.180710 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:31 crc kubenswrapper[4858]: I1122 09:27:31.949423 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:31 crc kubenswrapper[4858]: I1122 09:27:31.994210 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:33 crc kubenswrapper[4858]: I1122 09:27:33.921351 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hhtsl" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="registry-server" containerID="cri-o://b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9" gracePeriod=2 Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.400000 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.554504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities\") pod \"1df29910-baaa-4125-aaa9-84b0c2605fce\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.554566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjdz\" (UniqueName: \"kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz\") pod \"1df29910-baaa-4125-aaa9-84b0c2605fce\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.554617 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content\") pod \"1df29910-baaa-4125-aaa9-84b0c2605fce\" (UID: \"1df29910-baaa-4125-aaa9-84b0c2605fce\") " Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.555268 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities" (OuterVolumeSpecName: "utilities") pod "1df29910-baaa-4125-aaa9-84b0c2605fce" (UID: "1df29910-baaa-4125-aaa9-84b0c2605fce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.559568 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz" (OuterVolumeSpecName: "kube-api-access-jqjdz") pod "1df29910-baaa-4125-aaa9-84b0c2605fce" (UID: "1df29910-baaa-4125-aaa9-84b0c2605fce"). InnerVolumeSpecName "kube-api-access-jqjdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.608438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1df29910-baaa-4125-aaa9-84b0c2605fce" (UID: "1df29910-baaa-4125-aaa9-84b0c2605fce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.656848 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.656877 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjdz\" (UniqueName: \"kubernetes.io/projected/1df29910-baaa-4125-aaa9-84b0c2605fce-kube-api-access-jqjdz\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.656887 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df29910-baaa-4125-aaa9-84b0c2605fce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.931613 4858 generic.go:334] "Generic (PLEG): container finished" podID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerID="b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9" exitCode=0 Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.931686 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhtsl" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.931712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerDied","Data":"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9"} Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.932062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhtsl" event={"ID":"1df29910-baaa-4125-aaa9-84b0c2605fce","Type":"ContainerDied","Data":"d15cb524b85c4cd2c61e88d5232d1844fe7278f4df3a85894e1ebaa5fad7c867"} Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.932091 4858 scope.go:117] "RemoveContainer" containerID="b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.954952 4858 scope.go:117] "RemoveContainer" containerID="c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7" Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.968732 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.977514 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hhtsl"] Nov 22 09:27:34 crc kubenswrapper[4858]: I1122 09:27:34.986900 4858 scope.go:117] "RemoveContainer" containerID="54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.030520 4858 scope.go:117] "RemoveContainer" containerID="b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9" Nov 22 09:27:35 crc kubenswrapper[4858]: E1122 09:27:35.031066 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9\": container with ID starting with b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9 not found: ID does not exist" containerID="b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.031183 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9"} err="failed to get container status \"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9\": rpc error: code = NotFound desc = could not find container \"b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9\": container with ID starting with b737a5eabeaa844af94cc3687c510843afd0f18e7077f73468f07aa8cb1feba9 not found: ID does not exist" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.031270 4858 scope.go:117] "RemoveContainer" containerID="c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7" Nov 22 09:27:35 crc kubenswrapper[4858]: E1122 09:27:35.031984 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7\": container with ID starting with c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7 not found: ID does not exist" containerID="c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.032032 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7"} err="failed to get container status \"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7\": rpc error: code = NotFound desc = could not find container \"c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7\": container with ID starting with c21bc606e5bcde6739818b17140fa250c0719d5b06b28f7e677fa5dee28582b7 not found: ID does not exist" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.032068 4858 scope.go:117] "RemoveContainer" containerID="54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19" Nov 22 09:27:35 crc kubenswrapper[4858]: E1122 09:27:35.032426 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19\": container with ID starting with 54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19 not found: ID does not exist" containerID="54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.032507 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19"} err="failed to get container status \"54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19\": rpc error: code = NotFound desc = could not find container \"54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19\": container with ID starting with 54fa9bbf039af8840038b2608e40cadf77a9d28b20fb87965ec05e8c14afda19 not found: ID does not exist" Nov 22 09:27:35 crc kubenswrapper[4858]: I1122 09:27:35.545975 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" path="/var/lib/kubelet/pods/1df29910-baaa-4125-aaa9-84b0c2605fce/volumes" Nov 22 09:27:41 crc kubenswrapper[4858]: I1122 09:27:41.536473 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:27:41 crc kubenswrapper[4858]: E1122 09:27:41.537545 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.995988 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:27:46 crc kubenswrapper[4858]: E1122 09:27:46.996715 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="extract-utilities" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996727 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="extract-utilities" Nov 22 09:27:46 crc kubenswrapper[4858]: E1122 09:27:46.996742 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="init" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996749 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="init" Nov 22 09:27:46 crc kubenswrapper[4858]: E1122 09:27:46.996768 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="extract-content" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996774 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="extract-content" Nov 22 09:27:46 crc kubenswrapper[4858]: E1122 09:27:46.996797 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="dnsmasq-dns" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996802 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="dnsmasq-dns" Nov 22 09:27:46 crc kubenswrapper[4858]: E1122 09:27:46.996817 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="registry-server" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996822 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="registry-server" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996982 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df29910-baaa-4125-aaa9-84b0c2605fce" containerName="registry-server" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.996997 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc9a18c-a39b-42a2-a58f-59e1b9f61185" containerName="dnsmasq-dns" Nov 22 09:27:46 crc kubenswrapper[4858]: I1122 09:27:46.998292 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.009296 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.084832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.084908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfzl\" (UniqueName: \"kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.084996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.186599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfzl\" (UniqueName: \"kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.186736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.186934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.187268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.187532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.216246 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2zfzl\" (UniqueName: \"kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl\") pod \"redhat-marketplace-fqsp4\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.370464 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:47 crc kubenswrapper[4858]: I1122 09:27:47.814681 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:27:47 crc kubenswrapper[4858]: W1122 09:27:47.820333 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2790e353_ce2d_438a_be9b_df0fc3da9b79.slice/crio-6cd85b94d052bb9c8a4080d4ce746f831538ad19734cdbc854a659f55a85ed86 WatchSource:0}: Error finding container 6cd85b94d052bb9c8a4080d4ce746f831538ad19734cdbc854a659f55a85ed86: Status 404 returned error can't find the container with id 6cd85b94d052bb9c8a4080d4ce746f831538ad19734cdbc854a659f55a85ed86 Nov 22 09:27:48 crc kubenswrapper[4858]: I1122 09:27:48.057708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerStarted","Data":"6cd85b94d052bb9c8a4080d4ce746f831538ad19734cdbc854a659f55a85ed86"} Nov 22 09:27:49 crc kubenswrapper[4858]: I1122 09:27:49.073971 4858 generic.go:334] "Generic (PLEG): container finished" podID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerID="76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e" exitCode=0 Nov 22 09:27:49 crc kubenswrapper[4858]: I1122 09:27:49.074027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerDied","Data":"76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e"} Nov 22 09:27:51 crc kubenswrapper[4858]: I1122 09:27:51.094783 4858 generic.go:334] "Generic (PLEG): container finished" podID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerID="c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0" exitCode=0 Nov 22 09:27:51 crc kubenswrapper[4858]: I1122 09:27:51.094824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerDied","Data":"c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0"} Nov 22 09:27:52 crc kubenswrapper[4858]: I1122 09:27:52.112025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerStarted","Data":"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c"} Nov 22 09:27:52 crc kubenswrapper[4858]: I1122 09:27:52.142293 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fqsp4" podStartSLOduration=3.547700382 podStartE2EDuration="6.142277425s" podCreationTimestamp="2025-11-22 09:27:46 +0000 UTC" firstStartedPulling="2025-11-22 09:27:49.075932793 +0000 UTC m=+8230.917355799" lastFinishedPulling="2025-11-22 09:27:51.670509796 +0000 UTC m=+8233.511932842" observedRunningTime="2025-11-22 09:27:52.13368267 +0000 UTC m=+8233.975105676" 
watchObservedRunningTime="2025-11-22 09:27:52.142277425 +0000 UTC m=+8233.983700431" Nov 22 09:27:55 crc kubenswrapper[4858]: I1122 09:27:55.321713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:55 crc kubenswrapper[4858]: I1122 09:27:55.324097 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:27:56 crc kubenswrapper[4858]: I1122 09:27:56.535166 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:27:56 crc kubenswrapper[4858]: E1122 09:27:56.535624 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:27:57 crc kubenswrapper[4858]: I1122 09:27:57.370861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:57 crc kubenswrapper[4858]: I1122 09:27:57.370908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:57 crc kubenswrapper[4858]: I1122 09:27:57.440232 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:58 crc kubenswrapper[4858]: I1122 09:27:58.215399 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:27:58 crc kubenswrapper[4858]: I1122 09:27:58.263656 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:28:00 crc kubenswrapper[4858]: I1122 09:28:00.182112 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fqsp4" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="registry-server" containerID="cri-o://079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c" gracePeriod=2 Nov 22 09:28:00 crc kubenswrapper[4858]: I1122 09:28:00.990392 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.046807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities\") pod \"2790e353-ce2d-438a-be9b-df0fc3da9b79\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.047304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zfzl\" (UniqueName: \"kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl\") pod \"2790e353-ce2d-438a-be9b-df0fc3da9b79\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.047394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content\") pod \"2790e353-ce2d-438a-be9b-df0fc3da9b79\" (UID: \"2790e353-ce2d-438a-be9b-df0fc3da9b79\") " Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.048632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities" (OuterVolumeSpecName: "utilities") pod "2790e353-ce2d-438a-be9b-df0fc3da9b79" (UID: "2790e353-ce2d-438a-be9b-df0fc3da9b79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.066205 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2790e353-ce2d-438a-be9b-df0fc3da9b79" (UID: "2790e353-ce2d-438a-be9b-df0fc3da9b79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.085616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl" (OuterVolumeSpecName: "kube-api-access-2zfzl") pod "2790e353-ce2d-438a-be9b-df0fc3da9b79" (UID: "2790e353-ce2d-438a-be9b-df0fc3da9b79"). InnerVolumeSpecName "kube-api-access-2zfzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.149795 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.150269 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zfzl\" (UniqueName: \"kubernetes.io/projected/2790e353-ce2d-438a-be9b-df0fc3da9b79-kube-api-access-2zfzl\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.150283 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790e353-ce2d-438a-be9b-df0fc3da9b79-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.191507 4858 generic.go:334] "Generic (PLEG): container finished" podID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerID="079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c" exitCode=0 Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.191583 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fqsp4" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.191573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerDied","Data":"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c"} Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.191645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fqsp4" event={"ID":"2790e353-ce2d-438a-be9b-df0fc3da9b79","Type":"ContainerDied","Data":"6cd85b94d052bb9c8a4080d4ce746f831538ad19734cdbc854a659f55a85ed86"} Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.191670 4858 scope.go:117] "RemoveContainer" containerID="079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.223364 4858 scope.go:117] "RemoveContainer" containerID="c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.227293 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.235696 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fqsp4"] Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.250304 4858 scope.go:117] "RemoveContainer" containerID="76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.288656 4858 scope.go:117] "RemoveContainer" containerID="079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c" Nov 22 09:28:01 crc kubenswrapper[4858]: E1122 09:28:01.289290 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c\": container with ID starting with 079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c not found: ID does not exist" containerID="079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.289358 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c"} err="failed to get container status \"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c\": rpc error: code = NotFound desc = could not find container \"079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c\": container with ID starting with 079786c32a0975964a59b8630b7ca4707f3550aa9a92447fa2c6c9f77852d84c not found: ID does not exist" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.289391 4858 scope.go:117] "RemoveContainer" containerID="c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0" Nov 22 09:28:01 crc kubenswrapper[4858]: E1122 09:28:01.289764 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0\": container with ID starting with c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0 not found: ID does not exist" containerID="c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.289931 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0"} err="failed to get container status \"c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0\": rpc error: code = NotFound desc = could not find container \"c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0\": container with ID starting with c69e2c4bba710fde40738b48f91516aeb650ec6b8822ab45168ebd77fc8dd0f0 not found: ID does not exist" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.290074 4858 scope.go:117] "RemoveContainer" containerID="76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e" Nov 22 09:28:01 crc kubenswrapper[4858]: E1122 09:28:01.290677 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e\": container with ID starting with 76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e not found: ID does not exist" containerID="76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.290708 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e"} err="failed to get container status \"76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e\": rpc error: code = NotFound desc = could not find container \"76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e\": container with ID starting with 76eb43b312920213344b80ebe6ebd457daa5ec5d70893c7c17765117a099967e not found: ID does not exist" Nov 22 09:28:01 crc kubenswrapper[4858]: I1122 09:28:01.551419 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" path="/var/lib/kubelet/pods/2790e353-ce2d-438a-be9b-df0fc3da9b79/volumes" Nov 22 09:28:07 crc kubenswrapper[4858]: I1122 09:28:07.536082 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:28:07 crc kubenswrapper[4858]: E1122 09:28:07.536891 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:28:18 crc kubenswrapper[4858]: I1122 09:28:18.535807 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.357052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099"} Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.731548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7lgn8"] Nov 22 09:28:19 crc kubenswrapper[4858]: E1122 09:28:19.732022 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="registry-server" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.732038 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="registry-server" Nov 22 09:28:19 crc kubenswrapper[4858]: E1122 09:28:19.732057 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="extract-utilities" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.732066 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="extract-utilities" Nov 22 09:28:19 crc kubenswrapper[4858]: E1122 09:28:19.732110 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="extract-content" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.732118 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="extract-content" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.732363 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2790e353-ce2d-438a-be9b-df0fc3da9b79" containerName="registry-server" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.733098 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.742633 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7lgn8"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.827892 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4pspr"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.829530 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.835737 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4pspr"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.908081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfp7q\" (UniqueName: \"kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.908297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjz9d\" (UniqueName: \"kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d\") pod \"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.908390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts\") pod \"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.908523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.932065 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mrxcb"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.933576 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.944349 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-04c7-account-create-dlvt4"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.945978 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.948539 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.954746 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mrxcb"] Nov 22 09:28:19 crc kubenswrapper[4858]: I1122 09:28:19.965106 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04c7-account-create-dlvt4"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.009538 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp7q\" (UniqueName: \"kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.009632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjz9d\" (UniqueName: \"kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d\") pod \"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.009671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts\") pod \"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.009710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.010485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.010967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts\") pod \"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.028709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp7q\" (UniqueName: \"kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q\") pod \"nova-api-db-create-7lgn8\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.028708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjz9d\" (UniqueName: \"kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d\") pod 
\"nova-cell0-db-create-4pspr\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.096082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.111652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.112048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgf2r\" (UniqueName: \"kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.112177 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.112221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vgv\" (UniqueName: \"kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.139427 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b969-account-create-5fcl7"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.140658 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.143122 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.145526 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.155047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b969-account-create-5fcl7"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.213751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.213800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vgv\" (UniqueName: \"kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.213914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.213986 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgf2r\" (UniqueName: \"kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.215578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.217953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.242367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vgv\" (UniqueName: \"kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv\") pod \"nova-cell1-db-create-mrxcb\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.242416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgf2r\" (UniqueName: \"kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r\") pod \"nova-api-04c7-account-create-dlvt4\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.255128 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.275185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.316933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7t5\" (UniqueName: \"kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.317028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.335253 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f3b1-account-create-sp5kt"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.336399 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.339799 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.342762 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f3b1-account-create-sp5kt"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.418846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7t5\" (UniqueName: \"kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.418897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.419623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.442506 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7t5\" (UniqueName: \"kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5\") pod \"nova-cell0-b969-account-create-5fcl7\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.520086 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvx9m\" (UniqueName: \"kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.520279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.589986 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.623381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvx9m\" (UniqueName: \"kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.623574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.624526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.641924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvx9m\" (UniqueName: \"kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m\") pod \"nova-cell1-f3b1-account-create-sp5kt\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: W1122 09:28:20.643807 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf09c5150_3176_431c_a614_589a67efffa0.slice/crio-c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1 WatchSource:0}: Error finding container c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1: Status 404 returned error can't find the container with id c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1 Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.645062 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7lgn8"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.653787 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.699534 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4pspr"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.804887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mrxcb"] Nov 22 09:28:20 crc kubenswrapper[4858]: I1122 09:28:20.895271 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04c7-account-create-dlvt4"] Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.028965 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b969-account-create-5fcl7"] Nov 22 09:28:21 crc kubenswrapper[4858]: W1122 09:28:21.059876 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ca7ac5_bfc8_44b0_b0c5_7bb71992848a.slice/crio-941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6 WatchSource:0}: Error finding container 941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6: Status 404 returned error can't find the container with id 941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6 Nov 22 09:28:21 crc kubenswrapper[4858]: W1122 09:28:21.187255 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c138cb2_52d6_4af4_947e_ab721fc2b04d.slice/crio-83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e WatchSource:0}: Error finding container 83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e: Status 404 returned error can't find the container with id 83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.213279 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f3b1-account-create-sp5kt"] Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.372109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4pspr" event={"ID":"ec81453b-74ee-4e47-9838-925ccbc8cace","Type":"ContainerStarted","Data":"818543460bf30055b03bcfcd4e0fd8f5f8cf6c09019920f1e66c4835464c5074"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.372151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4pspr" event={"ID":"ec81453b-74ee-4e47-9838-925ccbc8cace","Type":"ContainerStarted","Data":"12a5221c43c0caf1b2c160ee2e6213d729a3365e35828cd6d8f89d0a0e109cda"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.373617 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b969-account-create-5fcl7" event={"ID":"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a","Type":"ContainerStarted","Data":"941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.374838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" event={"ID":"3c138cb2-52d6-4af4-947e-ab721fc2b04d","Type":"ContainerStarted","Data":"83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.376185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrxcb" 
event={"ID":"b6894e36-c578-4bc1-99c3-96934f75664f","Type":"ContainerStarted","Data":"aaaeab0be14d1b4b62adad21c31ba1af3135e33defce9771a58a19398910bf8b"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.376207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrxcb" event={"ID":"b6894e36-c578-4bc1-99c3-96934f75664f","Type":"ContainerStarted","Data":"c26c0991bade4358b015b52d60c4e54cdc36a9b0a834203b4537722938643af5"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.378040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7lgn8" event={"ID":"f09c5150-3176-431c-a614-589a67efffa0","Type":"ContainerStarted","Data":"b6b7f8952a75ad432a0641e6296003887ef48941795dc9f8214df1749acef0d4"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.378088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7lgn8" event={"ID":"f09c5150-3176-431c-a614-589a67efffa0","Type":"ContainerStarted","Data":"c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.379613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04c7-account-create-dlvt4" event={"ID":"088a68f9-5376-412d-a96d-8f16ecf6a850","Type":"ContainerStarted","Data":"87f92f33e6d1e62dc706dd981728716dbf374263266ff4176ffaec2b543c49e6"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.379644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04c7-account-create-dlvt4" event={"ID":"088a68f9-5376-412d-a96d-8f16ecf6a850","Type":"ContainerStarted","Data":"dd38184d59fa8c38e9fc98d0ccc3ebfefbc920f60c486044163007594e093c49"} Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.403637 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-4pspr" podStartSLOduration=2.403619936 podStartE2EDuration="2.403619936s" podCreationTimestamp="2025-11-22 09:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:21.394709291 +0000 UTC m=+8263.236132297" watchObservedRunningTime="2025-11-22 09:28:21.403619936 +0000 UTC m=+8263.245042952" Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.415269 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-mrxcb" podStartSLOduration=2.415245858 podStartE2EDuration="2.415245858s" podCreationTimestamp="2025-11-22 09:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:21.406676294 +0000 UTC m=+8263.248099300" watchObservedRunningTime="2025-11-22 09:28:21.415245858 +0000 UTC m=+8263.256668864" Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.426114 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-04c7-account-create-dlvt4" podStartSLOduration=2.426094325 podStartE2EDuration="2.426094325s" podCreationTimestamp="2025-11-22 09:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:21.418562813 +0000 UTC m=+8263.259985819" watchObservedRunningTime="2025-11-22 09:28:21.426094325 +0000 UTC m=+8263.267517331" Nov 22 09:28:21 crc kubenswrapper[4858]: I1122 09:28:21.440540 4858 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-api-db-create-7lgn8" podStartSLOduration=2.440525706 podStartE2EDuration="2.440525706s" podCreationTimestamp="2025-11-22 09:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:21.433051717 +0000 UTC m=+8263.274474723" watchObservedRunningTime="2025-11-22 09:28:21.440525706 +0000 UTC m=+8263.281948712" Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.391315 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec81453b-74ee-4e47-9838-925ccbc8cace" containerID="818543460bf30055b03bcfcd4e0fd8f5f8cf6c09019920f1e66c4835464c5074" exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.391453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4pspr" event={"ID":"ec81453b-74ee-4e47-9838-925ccbc8cace","Type":"ContainerDied","Data":"818543460bf30055b03bcfcd4e0fd8f5f8cf6c09019920f1e66c4835464c5074"} Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.395246 4858 generic.go:334] "Generic (PLEG): container finished" podID="89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" containerID="b94c43098792e1d88c3c02839fab03f0eddb71650538bad8222d4d3fbbedd2a8" exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.395314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b969-account-create-5fcl7" event={"ID":"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a","Type":"ContainerDied","Data":"b94c43098792e1d88c3c02839fab03f0eddb71650538bad8222d4d3fbbedd2a8"} Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.397607 4858 generic.go:334] "Generic (PLEG): container finished" podID="3c138cb2-52d6-4af4-947e-ab721fc2b04d" containerID="fa261e5f5e7bf93982a936f9462835ed7ef7e4000e4767a93010bc445fed7d9d" exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.397715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" event={"ID":"3c138cb2-52d6-4af4-947e-ab721fc2b04d","Type":"ContainerDied","Data":"fa261e5f5e7bf93982a936f9462835ed7ef7e4000e4767a93010bc445fed7d9d"} Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.402050 4858 generic.go:334] "Generic (PLEG): container finished" podID="b6894e36-c578-4bc1-99c3-96934f75664f" containerID="aaaeab0be14d1b4b62adad21c31ba1af3135e33defce9771a58a19398910bf8b" exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.402149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrxcb" event={"ID":"b6894e36-c578-4bc1-99c3-96934f75664f","Type":"ContainerDied","Data":"aaaeab0be14d1b4b62adad21c31ba1af3135e33defce9771a58a19398910bf8b"} Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.406837 4858 generic.go:334] "Generic (PLEG): container finished" podID="f09c5150-3176-431c-a614-589a67efffa0" containerID="b6b7f8952a75ad432a0641e6296003887ef48941795dc9f8214df1749acef0d4" exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.406971 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7lgn8" event={"ID":"f09c5150-3176-431c-a614-589a67efffa0","Type":"ContainerDied","Data":"b6b7f8952a75ad432a0641e6296003887ef48941795dc9f8214df1749acef0d4"} Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.410419 4858 generic.go:334] "Generic (PLEG): container finished" podID="088a68f9-5376-412d-a96d-8f16ecf6a850" containerID="87f92f33e6d1e62dc706dd981728716dbf374263266ff4176ffaec2b543c49e6" 
exitCode=0 Nov 22 09:28:22 crc kubenswrapper[4858]: I1122 09:28:22.410501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04c7-account-create-dlvt4" event={"ID":"088a68f9-5376-412d-a96d-8f16ecf6a850","Type":"ContainerDied","Data":"87f92f33e6d1e62dc706dd981728716dbf374263266ff4176ffaec2b543c49e6"} Nov 22 09:28:23 crc kubenswrapper[4858]: I1122 09:28:23.888432 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:23 crc kubenswrapper[4858]: I1122 09:28:23.917244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6vgv\" (UniqueName: \"kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv\") pod \"b6894e36-c578-4bc1-99c3-96934f75664f\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " Nov 22 09:28:23 crc kubenswrapper[4858]: I1122 09:28:23.918732 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts\") pod \"b6894e36-c578-4bc1-99c3-96934f75664f\" (UID: \"b6894e36-c578-4bc1-99c3-96934f75664f\") " Nov 22 09:28:23 crc kubenswrapper[4858]: I1122 09:28:23.919949 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6894e36-c578-4bc1-99c3-96934f75664f" (UID: "b6894e36-c578-4bc1-99c3-96934f75664f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:23 crc kubenswrapper[4858]: I1122 09:28:23.928591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv" (OuterVolumeSpecName: "kube-api-access-q6vgv") pod "b6894e36-c578-4bc1-99c3-96934f75664f" (UID: "b6894e36-c578-4bc1-99c3-96934f75664f"). InnerVolumeSpecName "kube-api-access-q6vgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.021043 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6vgv\" (UniqueName: \"kubernetes.io/projected/b6894e36-c578-4bc1-99c3-96934f75664f-kube-api-access-q6vgv\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.021095 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6894e36-c578-4bc1-99c3-96934f75664f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.088123 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.095695 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.106777 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148190 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h7t5\" (UniqueName: \"kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5\") pod \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148239 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts\") pod \"f09c5150-3176-431c-a614-589a67efffa0\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148399 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts\") pod \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\" (UID: \"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfp7q\" (UniqueName: \"kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q\") pod \"f09c5150-3176-431c-a614-589a67efffa0\" (UID: \"f09c5150-3176-431c-a614-589a67efffa0\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvx9m\" (UniqueName: \"kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m\") pod \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts\") pod \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\" (UID: \"3c138cb2-52d6-4af4-947e-ab721fc2b04d\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.148909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f09c5150-3176-431c-a614-589a67efffa0" (UID: "f09c5150-3176-431c-a614-589a67efffa0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.149492 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" (UID: "89ca7ac5-bfc8-44b0-b0c5-7bb71992848a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.150642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c138cb2-52d6-4af4-947e-ab721fc2b04d" (UID: "3c138cb2-52d6-4af4-947e-ab721fc2b04d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.152410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5" (OuterVolumeSpecName: "kube-api-access-5h7t5") pod "89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" (UID: "89ca7ac5-bfc8-44b0-b0c5-7bb71992848a"). InnerVolumeSpecName "kube-api-access-5h7t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.152511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m" (OuterVolumeSpecName: "kube-api-access-lvx9m") pod "3c138cb2-52d6-4af4-947e-ab721fc2b04d" (UID: "3c138cb2-52d6-4af4-947e-ab721fc2b04d"). InnerVolumeSpecName "kube-api-access-lvx9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.156559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q" (OuterVolumeSpecName: "kube-api-access-jfp7q") pod "f09c5150-3176-431c-a614-589a67efffa0" (UID: "f09c5150-3176-431c-a614-589a67efffa0"). InnerVolumeSpecName "kube-api-access-jfp7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.161905 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.165760 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.249626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgf2r\" (UniqueName: \"kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r\") pod \"088a68f9-5376-412d-a96d-8f16ecf6a850\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.249750 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjz9d\" (UniqueName: \"kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d\") pod \"ec81453b-74ee-4e47-9838-925ccbc8cace\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.249834 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts\") pod \"ec81453b-74ee-4e47-9838-925ccbc8cace\" (UID: \"ec81453b-74ee-4e47-9838-925ccbc8cace\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.249867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts\") pod \"088a68f9-5376-412d-a96d-8f16ecf6a850\" (UID: \"088a68f9-5376-412d-a96d-8f16ecf6a850\") " Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250192 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250211 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfp7q\" (UniqueName: \"kubernetes.io/projected/f09c5150-3176-431c-a614-589a67efffa0-kube-api-access-jfp7q\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250223 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvx9m\" (UniqueName: \"kubernetes.io/projected/3c138cb2-52d6-4af4-947e-ab721fc2b04d-kube-api-access-lvx9m\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250234 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c138cb2-52d6-4af4-947e-ab721fc2b04d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250242 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h7t5\" (UniqueName: \"kubernetes.io/projected/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a-kube-api-access-5h7t5\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250251 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f09c5150-3176-431c-a614-589a67efffa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec81453b-74ee-4e47-9838-925ccbc8cace" (UID: "ec81453b-74ee-4e47-9838-925ccbc8cace"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.250803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "088a68f9-5376-412d-a96d-8f16ecf6a850" (UID: "088a68f9-5376-412d-a96d-8f16ecf6a850"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.253373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r" (OuterVolumeSpecName: "kube-api-access-lgf2r") pod "088a68f9-5376-412d-a96d-8f16ecf6a850" (UID: "088a68f9-5376-412d-a96d-8f16ecf6a850"). InnerVolumeSpecName "kube-api-access-lgf2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.253961 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d" (OuterVolumeSpecName: "kube-api-access-vjz9d") pod "ec81453b-74ee-4e47-9838-925ccbc8cace" (UID: "ec81453b-74ee-4e47-9838-925ccbc8cace"). InnerVolumeSpecName "kube-api-access-vjz9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.351960 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec81453b-74ee-4e47-9838-925ccbc8cace-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.352021 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/088a68f9-5376-412d-a96d-8f16ecf6a850-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.352040 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgf2r\" (UniqueName: \"kubernetes.io/projected/088a68f9-5376-412d-a96d-8f16ecf6a850-kube-api-access-lgf2r\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.352061 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjz9d\" (UniqueName: \"kubernetes.io/projected/ec81453b-74ee-4e47-9838-925ccbc8cace-kube-api-access-vjz9d\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.457243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b969-account-create-5fcl7" event={"ID":"89ca7ac5-bfc8-44b0-b0c5-7bb71992848a","Type":"ContainerDied","Data":"941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.457264 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b969-account-create-5fcl7" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.457291 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941c626c484f6684f9dec4de35bfb4379591209f1acbaee1f0c1b9f524f5dae6" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.459528 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.459546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f3b1-account-create-sp5kt" event={"ID":"3c138cb2-52d6-4af4-947e-ab721fc2b04d","Type":"ContainerDied","Data":"83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.459582 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83306237b80c9d0643d041f000dae8c56dbb64e975af4e09b462eb3cfb58b08e" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.461814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mrxcb" event={"ID":"b6894e36-c578-4bc1-99c3-96934f75664f","Type":"ContainerDied","Data":"c26c0991bade4358b015b52d60c4e54cdc36a9b0a834203b4537722938643af5"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.461840 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mrxcb" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.461858 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26c0991bade4358b015b52d60c4e54cdc36a9b0a834203b4537722938643af5" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.463971 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7lgn8" event={"ID":"f09c5150-3176-431c-a614-589a67efffa0","Type":"ContainerDied","Data":"c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.464009 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7lgn8" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.464017 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c9a15981a67b2dd6cbad49c47479c8c8a176ba42d9e532213ee369ec1d1ce1" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.465844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04c7-account-create-dlvt4" event={"ID":"088a68f9-5376-412d-a96d-8f16ecf6a850","Type":"ContainerDied","Data":"dd38184d59fa8c38e9fc98d0ccc3ebfefbc920f60c486044163007594e093c49"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.465891 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd38184d59fa8c38e9fc98d0ccc3ebfefbc920f60c486044163007594e093c49" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.465950 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04c7-account-create-dlvt4" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.471183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4pspr" event={"ID":"ec81453b-74ee-4e47-9838-925ccbc8cace","Type":"ContainerDied","Data":"12a5221c43c0caf1b2c160ee2e6213d729a3365e35828cd6d8f89d0a0e109cda"} Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.471257 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12a5221c43c0caf1b2c160ee2e6213d729a3365e35828cd6d8f89d0a0e109cda" Nov 22 09:28:24 crc kubenswrapper[4858]: I1122 09:28:24.471347 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4pspr" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.426792 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbl5d"] Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.427949 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09c5150-3176-431c-a614-589a67efffa0" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.427973 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09c5150-3176-431c-a614-589a67efffa0" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.427996 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088a68f9-5376-412d-a96d-8f16ecf6a850" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428008 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="088a68f9-5376-412d-a96d-8f16ecf6a850" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.428041 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c138cb2-52d6-4af4-947e-ab721fc2b04d" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c138cb2-52d6-4af4-947e-ab721fc2b04d" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.428071 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6894e36-c578-4bc1-99c3-96934f75664f" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428081 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6894e36-c578-4bc1-99c3-96934f75664f" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.428104 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428113 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: E1122 09:28:25.428129 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec81453b-74ee-4e47-9838-925ccbc8cace" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428138 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec81453b-74ee-4e47-9838-925ccbc8cace" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428444 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c138cb2-52d6-4af4-947e-ab721fc2b04d" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09c5150-3176-431c-a614-589a67efffa0" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428502 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec81453b-74ee-4e47-9838-925ccbc8cace" containerName="mariadb-database-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428514 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6894e36-c578-4bc1-99c3-96934f75664f" containerName="mariadb-database-create" Nov 
22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428536 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.428568 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="088a68f9-5376-412d-a96d-8f16ecf6a850" containerName="mariadb-account-create" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.431238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.434735 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.434950 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.434971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5nctz" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.436451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbl5d"] Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.474150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.474192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8z4n\" (UniqueName: \"kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.474384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.474416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.576262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.576314 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.576434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.576465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8z4n\" (UniqueName: \"kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.582200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.582865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.584818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.593389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8z4n\" (UniqueName: \"kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n\") pod \"nova-cell0-conductor-db-sync-fbl5d\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:25 crc kubenswrapper[4858]: I1122 09:28:25.751195 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:26 crc kubenswrapper[4858]: I1122 09:28:26.230823 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbl5d"] Nov 22 09:28:26 crc kubenswrapper[4858]: W1122 09:28:26.232160 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod124066f7_9a16_4d81_a897_e7b47ef06710.slice/crio-88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731 WatchSource:0}: Error finding container 88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731: Status 404 returned error can't find the container with id 88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731 Nov 22 09:28:26 crc kubenswrapper[4858]: I1122 09:28:26.489483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" event={"ID":"124066f7-9a16-4d81-a897-e7b47ef06710","Type":"ContainerStarted","Data":"88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731"} Nov 22 09:28:36 crc kubenswrapper[4858]: I1122 09:28:36.610432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" event={"ID":"124066f7-9a16-4d81-a897-e7b47ef06710","Type":"ContainerStarted","Data":"02301d2e275ba72fe89e7d7c44b31ea1dd7a3ed37c733ad98c9d4ad774b46f99"} Nov 22 09:28:36 crc kubenswrapper[4858]: I1122 09:28:36.627278 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" podStartSLOduration=2.394868101 podStartE2EDuration="11.627254312s" podCreationTimestamp="2025-11-22 09:28:25 +0000 UTC" firstStartedPulling="2025-11-22 09:28:26.234904059 +0000 UTC m=+8268.076327075" lastFinishedPulling="2025-11-22 09:28:35.46729027 +0000 UTC m=+8277.308713286" observedRunningTime="2025-11-22 09:28:36.621974353 +0000 UTC m=+8278.463397369" watchObservedRunningTime="2025-11-22 09:28:36.627254312 +0000 UTC m=+8278.468677328" Nov 22 09:28:36 crc kubenswrapper[4858]: I1122 09:28:36.966097 4858 scope.go:117] "RemoveContainer" containerID="1a09c6820aefe7f92ba89aecbe89b535908162bbe95d10862a2e0fe019fa63ba" Nov 22 09:28:36 crc kubenswrapper[4858]: I1122 09:28:36.989250 4858 scope.go:117] "RemoveContainer" containerID="a5c5e9edcb5e3de31e1285faeff69b1a4b12b015a3ebea1d3911cea5bc782e7e" Nov 22 09:28:41 crc kubenswrapper[4858]: I1122 09:28:41.675834 4858 generic.go:334] "Generic (PLEG): container finished" podID="124066f7-9a16-4d81-a897-e7b47ef06710" containerID="02301d2e275ba72fe89e7d7c44b31ea1dd7a3ed37c733ad98c9d4ad774b46f99" exitCode=0 Nov 22 09:28:41 crc kubenswrapper[4858]: I1122 09:28:41.675934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" event={"ID":"124066f7-9a16-4d81-a897-e7b47ef06710","Type":"ContainerDied","Data":"02301d2e275ba72fe89e7d7c44b31ea1dd7a3ed37c733ad98c9d4ad774b46f99"} Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.066714 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.225405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data\") pod \"124066f7-9a16-4d81-a897-e7b47ef06710\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.225539 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8z4n\" (UniqueName: \"kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n\") pod \"124066f7-9a16-4d81-a897-e7b47ef06710\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.225622 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts\") pod \"124066f7-9a16-4d81-a897-e7b47ef06710\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.225712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle\") pod \"124066f7-9a16-4d81-a897-e7b47ef06710\" (UID: \"124066f7-9a16-4d81-a897-e7b47ef06710\") " Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.234408 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts" (OuterVolumeSpecName: "scripts") pod "124066f7-9a16-4d81-a897-e7b47ef06710" (UID: "124066f7-9a16-4d81-a897-e7b47ef06710"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.235670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n" (OuterVolumeSpecName: "kube-api-access-j8z4n") pod "124066f7-9a16-4d81-a897-e7b47ef06710" (UID: "124066f7-9a16-4d81-a897-e7b47ef06710"). InnerVolumeSpecName "kube-api-access-j8z4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.251244 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data" (OuterVolumeSpecName: "config-data") pod "124066f7-9a16-4d81-a897-e7b47ef06710" (UID: "124066f7-9a16-4d81-a897-e7b47ef06710"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.256158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "124066f7-9a16-4d81-a897-e7b47ef06710" (UID: "124066f7-9a16-4d81-a897-e7b47ef06710"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.327743 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8z4n\" (UniqueName: \"kubernetes.io/projected/124066f7-9a16-4d81-a897-e7b47ef06710-kube-api-access-j8z4n\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.327772 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.327784 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.327791 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124066f7-9a16-4d81-a897-e7b47ef06710-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.700545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" event={"ID":"124066f7-9a16-4d81-a897-e7b47ef06710","Type":"ContainerDied","Data":"88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731"} Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.700639 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88903843a74bf9b245efaf2013de9347c5e50f309e980e2f83a2aa3adbb91731" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.700647 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbl5d" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.799107 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:28:43 crc kubenswrapper[4858]: E1122 09:28:43.799753 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124066f7-9a16-4d81-a897-e7b47ef06710" containerName="nova-cell0-conductor-db-sync" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.799864 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="124066f7-9a16-4d81-a897-e7b47ef06710" containerName="nova-cell0-conductor-db-sync" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.800180 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="124066f7-9a16-4d81-a897-e7b47ef06710" containerName="nova-cell0-conductor-db-sync" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.801034 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.803067 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5nctz" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.803334 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.810092 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.940076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.940161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24b7\" (UniqueName: \"kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:43 crc kubenswrapper[4858]: I1122 09:28:43.940248 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.041799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.041893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w24b7\" (UniqueName: \"kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.042009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.045737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.045854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.059132 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w24b7\" (UniqueName: \"kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7\") pod \"nova-cell0-conductor-0\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.121155 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.549434 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:28:44 crc kubenswrapper[4858]: W1122 09:28:44.551233 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30dfd03_0897_4211_b0d7_aabfd726e408.slice/crio-577213e76612d6fced4a6876b82c510d41fc3c1993e4ddcb400b7891a06e38d1 WatchSource:0}: Error finding container 577213e76612d6fced4a6876b82c510d41fc3c1993e4ddcb400b7891a06e38d1: Status 404 returned error can't find the container with id 577213e76612d6fced4a6876b82c510d41fc3c1993e4ddcb400b7891a06e38d1 Nov 22 09:28:44 crc kubenswrapper[4858]: I1122 09:28:44.712650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f30dfd03-0897-4211-b0d7-aabfd726e408","Type":"ContainerStarted","Data":"577213e76612d6fced4a6876b82c510d41fc3c1993e4ddcb400b7891a06e38d1"} Nov 22 09:28:45 crc kubenswrapper[4858]: I1122 09:28:45.721567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f30dfd03-0897-4211-b0d7-aabfd726e408","Type":"ContainerStarted","Data":"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb"} Nov 22 09:28:45 crc kubenswrapper[4858]: I1122 09:28:45.723000 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:45 crc kubenswrapper[4858]: I1122 09:28:45.753815 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.753786655 podStartE2EDuration="2.753786655s" podCreationTimestamp="2025-11-22 09:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:45.736188592 +0000 UTC m=+8287.577611608" watchObservedRunningTime="2025-11-22 09:28:45.753786655 +0000 UTC m=+8287.595209701" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.147170 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.613686 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dgz4c"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.615263 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.618280 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.621336 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.630163 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dgz4c"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.732167 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.744495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.749064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.758002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.758078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.758108 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.758185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96xn\" (UniqueName: \"kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.765597 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.772641 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.775029 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.786546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.831936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.859908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.859984 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n96xn\" (UniqueName: \"kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjxt\" (UniqueName: \"kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860108 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860181 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5rt\" (UniqueName: \"kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860224 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.860242 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.872699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.872989 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.873777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.874750 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.876865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.881804 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.883256 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.922884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n96xn\" (UniqueName: \"kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn\") pod \"nova-cell0-cell-mapping-dgz4c\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.934374 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.935906 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.950940 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.952111 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.959979 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962684 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962703 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962759 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjxt\" (UniqueName: \"kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kf6\" (UniqueName: \"kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.962897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5rt\" (UniqueName: \"kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.975418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.975708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.976691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 
09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.976991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.978035 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.980643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:49 crc kubenswrapper[4858]: I1122 09:28:49.986455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.002792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5rt\" (UniqueName: \"kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt\") pod \"nova-api-0\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " pod="openstack/nova-api-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.008887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjxt\" (UniqueName: \"kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt\") pod \"nova-metadata-0\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " pod="openstack/nova-metadata-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.012186 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.023660 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.066370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.066475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.066517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.068243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n59gg\" (UniqueName: \"kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.068709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kf6\" (UniqueName: \"kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.068758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.068823 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.068957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.069014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.069071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.069103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzjbw\" (UniqueName: \"kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.074161 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.077218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.078874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.094638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kf6\" (UniqueName: \"kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.110600 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.112839 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171509 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n59gg\" (UniqueName: \"kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.171864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzjbw\" (UniqueName: \"kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.172806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.173128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.173833 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.175229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.178151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.178179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.187237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzjbw\" (UniqueName: \"kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw\") pod \"dnsmasq-dns-75f6f7df9-dkzs6\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " 
pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.193202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59gg\" (UniqueName: \"kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg\") pod \"nova-scheduler-0\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.424690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.435312 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.623421 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dgz4c"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.693838 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.744685 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rrbtg"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.746578 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.749784 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.749848 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.755108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rrbtg"] Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.808747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dgz4c" event={"ID":"adcfc0c9-3a34-4a25-bea5-4015b6c70880","Type":"ContainerStarted","Data":"08bc6bf4db7e053f2be5d08c31c8de8fb10d990e595317cc4c837e98a385adcc"} Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.809974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerStarted","Data":"ef346b3f52dfbadc786ff3da0191ebe41249ba8bc63fa471aaa9356bbab7877f"} Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.832920 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:50 crc kubenswrapper[4858]: W1122 09:28:50.842604 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39f8075d_502f_4032_a5a7_ac8fe5d447bc.slice/crio-74f08fd1eab738f9293dbe9cf171a6ab59960acf18f9b50b88cd2b1cdc4e3b57 WatchSource:0}: Error finding container 74f08fd1eab738f9293dbe9cf171a6ab59960acf18f9b50b88cd2b1cdc4e3b57: Status 404 returned error can't find the container with id 74f08fd1eab738f9293dbe9cf171a6ab59960acf18f9b50b88cd2b1cdc4e3b57 Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.843427 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:28:50 crc kubenswrapper[4858]: W1122 09:28:50.849628 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2318e3d5_dca0_4623_9c71_a153ac1136c6.slice/crio-a89c7b83c63ca1d37e17c70eea6a16ee3040401f0bf0e5295ba4c4dac87707ec WatchSource:0}: Error finding container a89c7b83c63ca1d37e17c70eea6a16ee3040401f0bf0e5295ba4c4dac87707ec: Status 404 returned error can't find the container with id a89c7b83c63ca1d37e17c70eea6a16ee3040401f0bf0e5295ba4c4dac87707ec Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.891668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkjz\" (UniqueName: \"kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.891763 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.891870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.891927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.995583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.995706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.995813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkjz\" (UniqueName: \"kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:50 crc kubenswrapper[4858]: I1122 09:28:50.995846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts\") pod 
\"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.012419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.015948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.021006 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.050983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkjz\" (UniqueName: \"kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz\") pod \"nova-cell1-conductor-db-sync-rrbtg\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.097477 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.186224 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.224020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.768695 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rrbtg"] Nov 22 09:28:51 crc kubenswrapper[4858]: W1122 09:28:51.774284 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dae4660_c997_42c2_8b43_1184edd7388e.slice/crio-a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa WatchSource:0}: Error finding container a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa: Status 404 returned error can't find the container with id a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.827438 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce7cb707-25be-452d-8471-63bac50960b0" containerID="82703cf068f9f271c84721dcd01fbccd82382fb96b4e789b01bfc105b99e4b7b" exitCode=0 Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.827719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" event={"ID":"ce7cb707-25be-452d-8471-63bac50960b0","Type":"ContainerDied","Data":"82703cf068f9f271c84721dcd01fbccd82382fb96b4e789b01bfc105b99e4b7b"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.827750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" event={"ID":"ce7cb707-25be-452d-8471-63bac50960b0","Type":"ContainerStarted","Data":"f09a63462e508606c96c5873e2a0d23028ea012f53cd7b47212365820baa8fee"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.829590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dgz4c" event={"ID":"adcfc0c9-3a34-4a25-bea5-4015b6c70880","Type":"ContainerStarted","Data":"9669e26c9c448a46b92ab9848eeaf897720ba792d47e71e1f0bd65eb66502bc0"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.834922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2318e3d5-dca0-4623-9c71-a153ac1136c6","Type":"ContainerStarted","Data":"a89c7b83c63ca1d37e17c70eea6a16ee3040401f0bf0e5295ba4c4dac87707ec"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.836668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" event={"ID":"2dae4660-c997-42c2-8b43-1184edd7388e","Type":"ContainerStarted","Data":"a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.838519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db757e00-4494-41fc-89da-db26b197f590","Type":"ContainerStarted","Data":"4b86b3213cc1ea465a15dafe425c6532294d948f3358b9bd009cc7db0ff2e2f2"} Nov 22 09:28:51 crc kubenswrapper[4858]: I1122 09:28:51.840036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerStarted","Data":"74f08fd1eab738f9293dbe9cf171a6ab59960acf18f9b50b88cd2b1cdc4e3b57"} Nov 22 09:28:51 crc 
kubenswrapper[4858]: I1122 09:28:51.875363 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dgz4c" podStartSLOduration=2.87534169 podStartE2EDuration="2.87534169s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:51.863207773 +0000 UTC m=+8293.704630779" watchObservedRunningTime="2025-11-22 09:28:51.87534169 +0000 UTC m=+8293.716764706" Nov 22 09:28:52 crc kubenswrapper[4858]: I1122 09:28:52.857484 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" event={"ID":"2dae4660-c997-42c2-8b43-1184edd7388e","Type":"ContainerStarted","Data":"646e24902e0a82f14865582bea7fe955b2f7d63642a56f1444d831742d8a43c6"} Nov 22 09:28:52 crc kubenswrapper[4858]: I1122 09:28:52.880847 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" podStartSLOduration=2.880810048 podStartE2EDuration="2.880810048s" podCreationTimestamp="2025-11-22 09:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:52.87243169 +0000 UTC m=+8294.713854706" watchObservedRunningTime="2025-11-22 09:28:52.880810048 +0000 UTC m=+8294.722233054" Nov 22 09:28:53 crc kubenswrapper[4858]: I1122 09:28:53.871354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" event={"ID":"ce7cb707-25be-452d-8471-63bac50960b0","Type":"ContainerStarted","Data":"f33c91ef77a22471d1aca401e99d6712203d5311feb3032dae0ac8df04c153b4"} Nov 22 09:28:53 crc kubenswrapper[4858]: I1122 09:28:53.871445 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:28:53 crc kubenswrapper[4858]: I1122 09:28:53.901354 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" podStartSLOduration=4.901311517 podStartE2EDuration="4.901311517s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:28:53.897138273 +0000 UTC m=+8295.738561289" watchObservedRunningTime="2025-11-22 09:28:53.901311517 +0000 UTC m=+8295.742734533" Nov 22 09:28:54 crc kubenswrapper[4858]: I1122 09:28:54.060531 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:54 crc kubenswrapper[4858]: I1122 09:28:54.084255 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.888940 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerStarted","Data":"ed7c6f1381088a6c89c74ed03c4d860bcc35746dfe2a18970527d97e84152363"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.889310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerStarted","Data":"ba436aad71db9f909b6e3cd25480ac6e8252ca53a01f10fe4969cb3e0e6f3000"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.889306 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-log" containerID="cri-o://ba436aad71db9f909b6e3cd25480ac6e8252ca53a01f10fe4969cb3e0e6f3000" gracePeriod=30 Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.889926 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-metadata" containerID="cri-o://ed7c6f1381088a6c89c74ed03c4d860bcc35746dfe2a18970527d97e84152363" gracePeriod=30 Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.895472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerStarted","Data":"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.895961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerStarted","Data":"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.898913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2318e3d5-dca0-4623-9c71-a153ac1136c6","Type":"ContainerStarted","Data":"bf78a975e9e85cb4fd472be323893a4c865331c1c2fcc41e7dafb5a79c88dcfa"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.898966 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2318e3d5-dca0-4623-9c71-a153ac1136c6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bf78a975e9e85cb4fd472be323893a4c865331c1c2fcc41e7dafb5a79c88dcfa" gracePeriod=30 Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.907584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db757e00-4494-41fc-89da-db26b197f590","Type":"ContainerStarted","Data":"d7ac8ccbd118017a3374ab8213f0014483278d67a520faa1f45f2a7a093a9375"} Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.909651 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.251377634 podStartE2EDuration="6.909628208s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="2025-11-22 09:28:50.84854543 +0000 UTC m=+8292.689968436" lastFinishedPulling="2025-11-22 09:28:54.506796004 +0000 UTC m=+8296.348219010" observedRunningTime="2025-11-22 09:28:55.905144495 +0000 UTC m=+8297.746567501" watchObservedRunningTime="2025-11-22 09:28:55.909628208 +0000 UTC m=+8297.751051234" Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.971123 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.315186297 podStartE2EDuration="6.97093302s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="2025-11-22 09:28:50.851531426 +0000 UTC m=+8292.692954432" lastFinishedPulling="2025-11-22 09:28:54.507278149 +0000 UTC m=+8296.348701155" observedRunningTime="2025-11-22 09:28:55.938629426 +0000 UTC m=+8297.780052432" watchObservedRunningTime="2025-11-22 09:28:55.97093302 +0000 UTC m=+8297.812356036" Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.972922 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=3.183372778 podStartE2EDuration="6.972911244s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="2025-11-22 09:28:50.724862452 +0000 UTC m=+8292.566285458" lastFinishedPulling="2025-11-22 09:28:54.514400918 +0000 UTC m=+8296.355823924" observedRunningTime="2025-11-22 09:28:55.955895719 +0000 UTC m=+8297.797318725" watchObservedRunningTime="2025-11-22 09:28:55.972911244 +0000 UTC m=+8297.814334250" Nov 22 09:28:55 crc kubenswrapper[4858]: I1122 09:28:55.979221 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.718132561 podStartE2EDuration="6.979198094s" podCreationTimestamp="2025-11-22 09:28:49 +0000 UTC" firstStartedPulling="2025-11-22 09:28:51.245234575 +0000 UTC m=+8293.086657581" lastFinishedPulling="2025-11-22 09:28:54.506300108 +0000 UTC m=+8296.347723114" observedRunningTime="2025-11-22 09:28:55.970917789 +0000 UTC m=+8297.812340795" watchObservedRunningTime="2025-11-22 09:28:55.979198094 +0000 UTC m=+8297.820621100" Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.923137 4858 generic.go:334] "Generic (PLEG): container finished" podID="2dae4660-c997-42c2-8b43-1184edd7388e" containerID="646e24902e0a82f14865582bea7fe955b2f7d63642a56f1444d831742d8a43c6" exitCode=0 Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.923217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" event={"ID":"2dae4660-c997-42c2-8b43-1184edd7388e","Type":"ContainerDied","Data":"646e24902e0a82f14865582bea7fe955b2f7d63642a56f1444d831742d8a43c6"} Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.931456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerDied","Data":"ed7c6f1381088a6c89c74ed03c4d860bcc35746dfe2a18970527d97e84152363"} Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.932239 4858 generic.go:334] "Generic (PLEG): container finished" podID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerID="ed7c6f1381088a6c89c74ed03c4d860bcc35746dfe2a18970527d97e84152363" exitCode=0 Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.932288 4858 generic.go:334] "Generic (PLEG): container finished" podID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerID="ba436aad71db9f909b6e3cd25480ac6e8252ca53a01f10fe4969cb3e0e6f3000" exitCode=143 Nov 22 09:28:56 crc kubenswrapper[4858]: I1122 09:28:56.932358 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerDied","Data":"ba436aad71db9f909b6e3cd25480ac6e8252ca53a01f10fe4969cb3e0e6f3000"} Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.599905 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.644497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle\") pod \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.644561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs\") pod \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.644693 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prjxt\" (UniqueName: \"kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt\") pod \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.644817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data\") pod \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\" (UID: \"39f8075d-502f-4032-a5a7-ac8fe5d447bc\") " Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.647037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs" (OuterVolumeSpecName: "logs") pod "39f8075d-502f-4032-a5a7-ac8fe5d447bc" (UID: "39f8075d-502f-4032-a5a7-ac8fe5d447bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.652707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt" (OuterVolumeSpecName: "kube-api-access-prjxt") pod "39f8075d-502f-4032-a5a7-ac8fe5d447bc" (UID: "39f8075d-502f-4032-a5a7-ac8fe5d447bc"). InnerVolumeSpecName "kube-api-access-prjxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.684890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f8075d-502f-4032-a5a7-ac8fe5d447bc" (UID: "39f8075d-502f-4032-a5a7-ac8fe5d447bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.687618 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data" (OuterVolumeSpecName: "config-data") pod "39f8075d-502f-4032-a5a7-ac8fe5d447bc" (UID: "39f8075d-502f-4032-a5a7-ac8fe5d447bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.747736 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.747787 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f8075d-502f-4032-a5a7-ac8fe5d447bc-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.747802 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prjxt\" (UniqueName: \"kubernetes.io/projected/39f8075d-502f-4032-a5a7-ac8fe5d447bc-kube-api-access-prjxt\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.747820 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f8075d-502f-4032-a5a7-ac8fe5d447bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.947352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39f8075d-502f-4032-a5a7-ac8fe5d447bc","Type":"ContainerDied","Data":"74f08fd1eab738f9293dbe9cf171a6ab59960acf18f9b50b88cd2b1cdc4e3b57"} Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.947402 4858 scope.go:117] "RemoveContainer" containerID="ed7c6f1381088a6c89c74ed03c4d860bcc35746dfe2a18970527d97e84152363" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.947462 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.951478 4858 generic.go:334] "Generic (PLEG): container finished" podID="adcfc0c9-3a34-4a25-bea5-4015b6c70880" containerID="9669e26c9c448a46b92ab9848eeaf897720ba792d47e71e1f0bd65eb66502bc0" exitCode=0 Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.951528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dgz4c" event={"ID":"adcfc0c9-3a34-4a25-bea5-4015b6c70880","Type":"ContainerDied","Data":"9669e26c9c448a46b92ab9848eeaf897720ba792d47e71e1f0bd65eb66502bc0"} Nov 22 09:28:57 crc kubenswrapper[4858]: I1122 09:28:57.990382 4858 scope.go:117] "RemoveContainer" containerID="ba436aad71db9f909b6e3cd25480ac6e8252ca53a01f10fe4969cb3e0e6f3000" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.023385 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.034860 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.046425 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:58 crc kubenswrapper[4858]: E1122 09:28:58.046992 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-metadata" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.047016 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-metadata" Nov 22 09:28:58 crc kubenswrapper[4858]: E1122 09:28:58.047064 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" 
containerName="nova-metadata-log" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.047076 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-log" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.047429 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-metadata" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.047479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" containerName="nova-metadata-log" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.049102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.067064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.067407 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.096813 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.177642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.177691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.177821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkxc\" (UniqueName: \"kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.177990 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.178309 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.279942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " 
pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.280361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.280409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.280433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.280468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkxc\" (UniqueName: \"kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.281469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.286086 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.286449 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.287970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.298208 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkxc\" (UniqueName: \"kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc\") pod \"nova-metadata-0\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.363718 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.412463 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.486138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle\") pod \"2dae4660-c997-42c2-8b43-1184edd7388e\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.486376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkjz\" (UniqueName: \"kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz\") pod \"2dae4660-c997-42c2-8b43-1184edd7388e\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.486438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts\") pod \"2dae4660-c997-42c2-8b43-1184edd7388e\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.486471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data\") pod \"2dae4660-c997-42c2-8b43-1184edd7388e\" (UID: \"2dae4660-c997-42c2-8b43-1184edd7388e\") " Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.490607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts" (OuterVolumeSpecName: "scripts") pod "2dae4660-c997-42c2-8b43-1184edd7388e" (UID: "2dae4660-c997-42c2-8b43-1184edd7388e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.491793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz" (OuterVolumeSpecName: "kube-api-access-6qkjz") pod "2dae4660-c997-42c2-8b43-1184edd7388e" (UID: "2dae4660-c997-42c2-8b43-1184edd7388e"). InnerVolumeSpecName "kube-api-access-6qkjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.520449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data" (OuterVolumeSpecName: "config-data") pod "2dae4660-c997-42c2-8b43-1184edd7388e" (UID: "2dae4660-c997-42c2-8b43-1184edd7388e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:58 crc kubenswrapper[4858]: I1122 09:28:58.521804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dae4660-c997-42c2-8b43-1184edd7388e" (UID: "2dae4660-c997-42c2-8b43-1184edd7388e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.590394 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.590689 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.590702 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dae4660-c997-42c2-8b43-1184edd7388e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.590716 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkjz\" (UniqueName: \"kubernetes.io/projected/2dae4660-c997-42c2-8b43-1184edd7388e-kube-api-access-6qkjz\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.971498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" event={"ID":"2dae4660-c997-42c2-8b43-1184edd7388e","Type":"ContainerDied","Data":"a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa"} Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.971556 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42d4f78871dd707fc88ac0b11ee51641f634ccf34c59a8b214d91c831ac57aa" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:58.971559 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rrbtg" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.034013 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:28:59 crc kubenswrapper[4858]: E1122 09:28:59.034543 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dae4660-c997-42c2-8b43-1184edd7388e" containerName="nova-cell1-conductor-db-sync" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.034560 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dae4660-c997-42c2-8b43-1184edd7388e" containerName="nova-cell1-conductor-db-sync" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.034745 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dae4660-c997-42c2-8b43-1184edd7388e" containerName="nova-cell1-conductor-db-sync" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.035495 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.041039 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.058367 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.102333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.102609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.102665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858lb\" (UniqueName: \"kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.204310 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858lb\" (UniqueName: \"kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.204488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.204759 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.210171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.211658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.223591 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858lb\" (UniqueName: \"kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb\") pod \"nova-cell1-conductor-0\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.362013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.563094 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f8075d-502f-4032-a5a7-ac8fe5d447bc" path="/var/lib/kubelet/pods/39f8075d-502f-4032-a5a7-ac8fe5d447bc/volumes" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.563855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.635408 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.719194 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle\") pod \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.719458 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n96xn\" (UniqueName: \"kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn\") pod \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.719480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data\") pod \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.719504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts\") pod \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\" (UID: \"adcfc0c9-3a34-4a25-bea5-4015b6c70880\") " Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.723747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts" (OuterVolumeSpecName: "scripts") pod "adcfc0c9-3a34-4a25-bea5-4015b6c70880" (UID: "adcfc0c9-3a34-4a25-bea5-4015b6c70880"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.724170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn" (OuterVolumeSpecName: "kube-api-access-n96xn") pod "adcfc0c9-3a34-4a25-bea5-4015b6c70880" (UID: "adcfc0c9-3a34-4a25-bea5-4015b6c70880"). InnerVolumeSpecName "kube-api-access-n96xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.749793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adcfc0c9-3a34-4a25-bea5-4015b6c70880" (UID: "adcfc0c9-3a34-4a25-bea5-4015b6c70880"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.754291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data" (OuterVolumeSpecName: "config-data") pod "adcfc0c9-3a34-4a25-bea5-4015b6c70880" (UID: "adcfc0c9-3a34-4a25-bea5-4015b6c70880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.815787 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.821568 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n96xn\" (UniqueName: \"kubernetes.io/projected/adcfc0c9-3a34-4a25-bea5-4015b6c70880-kube-api-access-n96xn\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.821600 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.821613 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.821625 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adcfc0c9-3a34-4a25-bea5-4015b6c70880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.991130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dgz4c" event={"ID":"adcfc0c9-3a34-4a25-bea5-4015b6c70880","Type":"ContainerDied","Data":"08bc6bf4db7e053f2be5d08c31c8de8fb10d990e595317cc4c837e98a385adcc"} Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.991176 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08bc6bf4db7e053f2be5d08c31c8de8fb10d990e595317cc4c837e98a385adcc" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.991234 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dgz4c" Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.996864 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff","Type":"ContainerStarted","Data":"60f74af907b70ecb84f1d003df80f0b9efd231fc6135a270c4e922eaf40dc680"} Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.999708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerStarted","Data":"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5"} Nov 22 09:28:59 crc kubenswrapper[4858]: I1122 09:28:59.999749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerStarted","Data":"f8c2530d0455bc6b1dc3f307284e12dc67bd3d801e77cd69500db1b97d2b5750"} Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.076893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.077679 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.111216 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.160313 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.172121 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.172388 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db757e00-4494-41fc-89da-db26b197f590" containerName="nova-scheduler-scheduler" containerID="cri-o://d7ac8ccbd118017a3374ab8213f0014483278d67a520faa1f45f2a7a093a9375" gracePeriod=30 Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.183731 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.426481 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.436790 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.493416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:29:00 crc kubenswrapper[4858]: I1122 09:29:00.498741 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="dnsmasq-dns" containerID="cri-o://6d1e7897a4158f0fb4cea63549ba7760a73b06807cc3a1fd286ce2faf340beb2" gracePeriod=10 Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.024099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff","Type":"ContainerStarted","Data":"94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2"} Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.024348 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.034257 4858 generic.go:334] "Generic (PLEG): container finished" podID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerID="6d1e7897a4158f0fb4cea63549ba7760a73b06807cc3a1fd286ce2faf340beb2" exitCode=0 Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.034424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" event={"ID":"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557","Type":"ContainerDied","Data":"6d1e7897a4158f0fb4cea63549ba7760a73b06807cc3a1fd286ce2faf340beb2"} Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.038483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerStarted","Data":"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147"} Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.046524 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.046504011 podStartE2EDuration="2.046504011s" podCreationTimestamp="2025-11-22 09:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:01.039378903 +0000 UTC m=+8302.880801939" watchObservedRunningTime="2025-11-22 09:29:01.046504011 +0000 UTC m=+8302.887927027" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.159801 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.96:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.162767 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.96:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.178433 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.199960 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.199933812 podStartE2EDuration="3.199933812s" podCreationTimestamp="2025-11-22 09:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:01.085941203 +0000 UTC m=+8302.927364209" watchObservedRunningTime="2025-11-22 09:29:01.199933812 +0000 UTC m=+8303.041356818" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.250989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc\") pod \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.251483 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config\") pod \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.251607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb\") pod \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.251752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssbzx\" (UniqueName: \"kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx\") pod \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.251926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb\") pod \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\" (UID: \"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557\") " Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.266219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx" (OuterVolumeSpecName: "kube-api-access-ssbzx") pod "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" (UID: "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557"). InnerVolumeSpecName "kube-api-access-ssbzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.308167 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" (UID: "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.318257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" (UID: "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.320776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" (UID: "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.348486 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config" (OuterVolumeSpecName: "config") pod "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" (UID: "cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.354776 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.354807 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.354818 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.354829 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:01 crc kubenswrapper[4858]: I1122 09:29:01.354840 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssbzx\" (UniqueName: \"kubernetes.io/projected/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557-kube-api-access-ssbzx\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.047898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" event={"ID":"cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557","Type":"ContainerDied","Data":"dea8aba12961477a565e2fea4cc31614d1302595eba7792d3d9dfc16f3d0a83f"} Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.047964 4858 scope.go:117] "RemoveContainer" containerID="6d1e7897a4158f0fb4cea63549ba7760a73b06807cc3a1fd286ce2faf340beb2" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.047975 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c4fd8b9f9-jb85q" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.048101 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-log" containerID="cri-o://e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" gracePeriod=30 Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.048260 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-metadata" containerID="cri-o://82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" gracePeriod=30 Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.048869 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-log" containerID="cri-o://45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a" gracePeriod=30 Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.049053 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-api" containerID="cri-o://da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e" gracePeriod=30 Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.074009 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.083700 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c4fd8b9f9-jb85q"] Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.086958 4858 scope.go:117] "RemoveContainer" containerID="bcfb15f6d2bcb0fa633452f32421f7e46daee59e27b0fb89f145eb820d736272" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.685247 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.784480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dkxc\" (UniqueName: \"kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc\") pod \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.784568 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle\") pod \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.784677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs\") pod \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.784744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data\") pod \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.784817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs\") pod \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\" (UID: \"040f18dd-ac17-4b6f-8a5d-5812cfce06fa\") " Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.785676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs" (OuterVolumeSpecName: "logs") pod "040f18dd-ac17-4b6f-8a5d-5812cfce06fa" (UID: "040f18dd-ac17-4b6f-8a5d-5812cfce06fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.805984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc" (OuterVolumeSpecName: "kube-api-access-9dkxc") pod "040f18dd-ac17-4b6f-8a5d-5812cfce06fa" (UID: "040f18dd-ac17-4b6f-8a5d-5812cfce06fa"). InnerVolumeSpecName "kube-api-access-9dkxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.809511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "040f18dd-ac17-4b6f-8a5d-5812cfce06fa" (UID: "040f18dd-ac17-4b6f-8a5d-5812cfce06fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.810554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data" (OuterVolumeSpecName: "config-data") pod "040f18dd-ac17-4b6f-8a5d-5812cfce06fa" (UID: "040f18dd-ac17-4b6f-8a5d-5812cfce06fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.832143 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "040f18dd-ac17-4b6f-8a5d-5812cfce06fa" (UID: "040f18dd-ac17-4b6f-8a5d-5812cfce06fa"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.887064 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.887097 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.887106 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.887115 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dkxc\" (UniqueName: \"kubernetes.io/projected/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-kube-api-access-9dkxc\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:02 crc kubenswrapper[4858]: I1122 09:29:02.887123 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040f18dd-ac17-4b6f-8a5d-5812cfce06fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.061299 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c3f5090-a442-427a-a031-801ee7a96745" containerID="45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a" exitCode=143 Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.061425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerDied","Data":"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a"} Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066070 4858 generic.go:334] "Generic (PLEG): container finished" podID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerID="82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" exitCode=0 Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066094 4858 generic.go:334] "Generic (PLEG): container finished" podID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerID="e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" exitCode=143 Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066115 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerDied","Data":"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147"} Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerDied","Data":"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5"} Nov 22 09:29:03 crc 
kubenswrapper[4858]: I1122 09:29:03.066156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"040f18dd-ac17-4b6f-8a5d-5812cfce06fa","Type":"ContainerDied","Data":"f8c2530d0455bc6b1dc3f307284e12dc67bd3d801e77cd69500db1b97d2b5750"} Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066175 4858 scope.go:117] "RemoveContainer" containerID="82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.066534 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.091379 4858 scope.go:117] "RemoveContainer" containerID="e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.118015 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.127370 4858 scope.go:117] "RemoveContainer" containerID="82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.129619 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147\": container with ID starting with 82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147 not found: ID does not exist" containerID="82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.129690 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147"} err="failed to get container status \"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147\": rpc error: code = NotFound desc = could not find container \"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147\": container with ID starting with 82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147 not found: ID does not exist" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.129738 4858 scope.go:117] "RemoveContainer" containerID="e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.130428 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5\": container with ID starting with e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5 not found: ID does not exist" containerID="e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.130464 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5"} err="failed to get container status \"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5\": rpc error: code = NotFound desc = could not find container \"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5\": container with ID starting with e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5 not found: ID does not exist" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.130488 4858 scope.go:117] "RemoveContainer" 
containerID="82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.130796 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147"} err="failed to get container status \"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147\": rpc error: code = NotFound desc = could not find container \"82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147\": container with ID starting with 82954713aa842d5f59f3d0f77e94ea2d013c57c36edc293a7a3a9291b810e147 not found: ID does not exist" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.130825 4858 scope.go:117] "RemoveContainer" containerID="e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.131310 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5"} err="failed to get container status \"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5\": rpc error: code = NotFound desc = could not find container \"e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5\": container with ID starting with e2e7ef770ae1c4a6112157581baffe8b52024a67a59d219e196c90bbdb263cd5 not found: ID does not exist" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.136406 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.149505 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.150260 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-metadata" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150293 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-metadata" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.150348 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adcfc0c9-3a34-4a25-bea5-4015b6c70880" containerName="nova-manage" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150362 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="adcfc0c9-3a34-4a25-bea5-4015b6c70880" containerName="nova-manage" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.150396 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="init" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150410 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="init" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.150429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-log" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150443 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-log" Nov 22 09:29:03 crc kubenswrapper[4858]: E1122 09:29:03.150517 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="dnsmasq-dns" Nov 22 09:29:03 crc 
kubenswrapper[4858]: I1122 09:29:03.150531 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="dnsmasq-dns" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150851 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" containerName="dnsmasq-dns" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150888 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-metadata" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150932 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" containerName="nova-metadata-log" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.150960 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="adcfc0c9-3a34-4a25-bea5-4015b6c70880" containerName="nova-manage" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.154898 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.160050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.161605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.161857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.294825 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.295017 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.295291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mns6q\" (UniqueName: \"kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.295468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.295658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " 
pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.397261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.397413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.397477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mns6q\" (UniqueName: \"kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.397519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.397576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.398860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.403669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.404108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.404303 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data\") pod \"nova-metadata-0\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.420258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mns6q\" (UniqueName: \"kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q\") pod \"nova-metadata-0\" 
(UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.485959 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.559619 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040f18dd-ac17-4b6f-8a5d-5812cfce06fa" path="/var/lib/kubelet/pods/040f18dd-ac17-4b6f-8a5d-5812cfce06fa/volumes" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.560956 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557" path="/var/lib/kubelet/pods/cf8d0c3a-c3c4-4dd7-a257-285ee1b6b557/volumes" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.800219 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.802179 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.812771 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.907007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5vs\" (UniqueName: \"kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.907137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.907168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:03 crc kubenswrapper[4858]: I1122 09:29:03.970668 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:03 crc kubenswrapper[4858]: W1122 09:29:03.974018 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d875d57_7b5c_405b_a183_3cad85f16980.slice/crio-1bf73107ce27688f26a7ac3927456341d5a4a44c2abc7e3558acb61a984df6f0 WatchSource:0}: Error finding container 1bf73107ce27688f26a7ac3927456341d5a4a44c2abc7e3558acb61a984df6f0: Status 404 returned error can't find the container with id 1bf73107ce27688f26a7ac3927456341d5a4a44c2abc7e3558acb61a984df6f0 Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.009289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities\") pod \"certified-operators-7lm8x\" (UID: 
\"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.009371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.009516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5vs\" (UniqueName: \"kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.009874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.009890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.030156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5vs\" (UniqueName: \"kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs\") pod \"certified-operators-7lm8x\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.097663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerStarted","Data":"1bf73107ce27688f26a7ac3927456341d5a4a44c2abc7e3558acb61a984df6f0"} Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.123147 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:04 crc kubenswrapper[4858]: I1122 09:29:04.672045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.111710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerStarted","Data":"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8"} Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.112168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerStarted","Data":"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79"} Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.116404 4858 generic.go:334] "Generic (PLEG): container finished" podID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerID="d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1" exitCode=0 Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.116442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerDied","Data":"d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1"} Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.116463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerStarted","Data":"fe311df40acb79877ad6c29c88cc7b7e6b725cc0b56a65fa12e297e8dd3b1bd3"} Nov 22 09:29:05 crc kubenswrapper[4858]: I1122 09:29:05.152717 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.15269345 podStartE2EDuration="2.15269345s" podCreationTimestamp="2025-11-22 09:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:05.14176166 +0000 UTC m=+8306.983184676" watchObservedRunningTime="2025-11-22 09:29:05.15269345 +0000 UTC m=+8306.994116456" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.125871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerStarted","Data":"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda"} Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.202297 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.205204 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.215852 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.366413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pkc\" (UniqueName: \"kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.366701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.367108 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.470375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.470647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pkc\" (UniqueName: \"kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.470733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.470896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.471246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.490918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n9pkc\" (UniqueName: \"kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc\") pod \"redhat-operators-858td\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.545503 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:06 crc kubenswrapper[4858]: I1122 09:29:06.995569 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:06 crc kubenswrapper[4858]: W1122 09:29:06.999536 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e47a092_2758_4294_82f3_6b7baf0fc912.slice/crio-d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22 WatchSource:0}: Error finding container d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22: Status 404 returned error can't find the container with id d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22 Nov 22 09:29:07 crc kubenswrapper[4858]: I1122 09:29:07.135149 4858 generic.go:334] "Generic (PLEG): container finished" podID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerID="6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda" exitCode=0 Nov 22 09:29:07 crc kubenswrapper[4858]: I1122 09:29:07.135205 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerDied","Data":"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda"} Nov 22 09:29:07 crc kubenswrapper[4858]: I1122 09:29:07.137893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerStarted","Data":"d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22"} Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.149595 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerID="b0626edc2769d88f87bfaf0d74800221c0acf3e64782837700281b9f040ec63d" exitCode=0 Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.149701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerDied","Data":"b0626edc2769d88f87bfaf0d74800221c0acf3e64782837700281b9f040ec63d"} Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.152821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerStarted","Data":"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7"} Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.196510 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lm8x" podStartSLOduration=2.797174004 podStartE2EDuration="5.196483229s" podCreationTimestamp="2025-11-22 09:29:03 +0000 UTC" firstStartedPulling="2025-11-22 09:29:05.119140826 +0000 UTC m=+8306.960563872" lastFinishedPulling="2025-11-22 09:29:07.518450101 +0000 UTC m=+8309.359873097" observedRunningTime="2025-11-22 09:29:08.193755301 +0000 UTC m=+8310.035178327" watchObservedRunningTime="2025-11-22 
09:29:08.196483229 +0000 UTC m=+8310.037906245" Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.486937 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:29:08 crc kubenswrapper[4858]: I1122 09:29:08.487003 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:29:09 crc kubenswrapper[4858]: I1122 09:29:09.163672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerStarted","Data":"42dda4d7c013497225a600830b444227d481bbe03d6918515af62151977ed172"} Nov 22 09:29:09 crc kubenswrapper[4858]: I1122 09:29:09.393554 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 09:29:12 crc kubenswrapper[4858]: I1122 09:29:12.199749 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerID="42dda4d7c013497225a600830b444227d481bbe03d6918515af62151977ed172" exitCode=0 Nov 22 09:29:12 crc kubenswrapper[4858]: I1122 09:29:12.199809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerDied","Data":"42dda4d7c013497225a600830b444227d481bbe03d6918515af62151977ed172"} Nov 22 09:29:13 crc kubenswrapper[4858]: I1122 09:29:13.486289 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:29:13 crc kubenswrapper[4858]: I1122 09:29:13.486697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:29:14 crc kubenswrapper[4858]: I1122 09:29:14.124358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:14 crc kubenswrapper[4858]: I1122 09:29:14.124446 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:14 crc kubenswrapper[4858]: I1122 09:29:14.192537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:14 crc kubenswrapper[4858]: I1122 09:29:14.302579 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:15 crc kubenswrapper[4858]: I1122 09:29:14.501670 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:15 crc kubenswrapper[4858]: I1122 09:29:14.501728 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:15 crc kubenswrapper[4858]: I1122 09:29:15.119621 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="galera" probeResult="failure" 
output="command timed out" Nov 22 09:29:15 crc kubenswrapper[4858]: I1122 09:29:15.120540 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:29:15 crc kubenswrapper[4858]: I1122 09:29:15.187727 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.156511 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.245863 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c3f5090-a442-427a-a031-801ee7a96745" containerID="da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e" exitCode=0 Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.245917 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.245975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerDied","Data":"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e"} Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.246060 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c3f5090-a442-427a-a031-801ee7a96745","Type":"ContainerDied","Data":"ef346b3f52dfbadc786ff3da0191ebe41249ba8bc63fa471aaa9356bbab7877f"} Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.246088 4858 scope.go:117] "RemoveContainer" containerID="da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.253868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerStarted","Data":"a64434333a390bb571799f987682e4b16aecc0d7ccdb263e229d6a02273f9251"} Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.254033 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lm8x" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="registry-server" containerID="cri-o://c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7" gracePeriod=2 Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.269208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs\") pod \"8c3f5090-a442-427a-a031-801ee7a96745\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.269742 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5rt\" (UniqueName: \"kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt\") pod \"8c3f5090-a442-427a-a031-801ee7a96745\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.269864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs" (OuterVolumeSpecName: "logs") pod "8c3f5090-a442-427a-a031-801ee7a96745" (UID: 
"8c3f5090-a442-427a-a031-801ee7a96745"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.271098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle\") pod \"8c3f5090-a442-427a-a031-801ee7a96745\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.271525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data\") pod \"8c3f5090-a442-427a-a031-801ee7a96745\" (UID: \"8c3f5090-a442-427a-a031-801ee7a96745\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.273897 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3f5090-a442-427a-a031-801ee7a96745-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.291271 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-858td" podStartSLOduration=2.706580765 podStartE2EDuration="10.291243063s" podCreationTimestamp="2025-11-22 09:29:06 +0000 UTC" firstStartedPulling="2025-11-22 09:29:08.151212351 +0000 UTC m=+8309.992635367" lastFinishedPulling="2025-11-22 09:29:15.735874659 +0000 UTC m=+8317.577297665" observedRunningTime="2025-11-22 09:29:16.28395511 +0000 UTC m=+8318.125378116" watchObservedRunningTime="2025-11-22 09:29:16.291243063 +0000 UTC m=+8318.132666069" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.292060 4858 scope.go:117] "RemoveContainer" containerID="45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.316101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt" (OuterVolumeSpecName: "kube-api-access-pn5rt") pod "8c3f5090-a442-427a-a031-801ee7a96745" (UID: "8c3f5090-a442-427a-a031-801ee7a96745"). InnerVolumeSpecName "kube-api-access-pn5rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.330712 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data" (OuterVolumeSpecName: "config-data") pod "8c3f5090-a442-427a-a031-801ee7a96745" (UID: "8c3f5090-a442-427a-a031-801ee7a96745"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.334494 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c3f5090-a442-427a-a031-801ee7a96745" (UID: "8c3f5090-a442-427a-a031-801ee7a96745"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.377122 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5rt\" (UniqueName: \"kubernetes.io/projected/8c3f5090-a442-427a-a031-801ee7a96745-kube-api-access-pn5rt\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.377165 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.377179 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c3f5090-a442-427a-a031-801ee7a96745-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.397393 4858 scope.go:117] "RemoveContainer" containerID="da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e" Nov 22 09:29:16 crc kubenswrapper[4858]: E1122 09:29:16.398043 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e\": container with ID starting with da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e not found: ID does not exist" containerID="da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.398079 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e"} err="failed to get container status \"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e\": rpc error: code = NotFound desc = could not find container \"da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e\": container with ID starting with da7fb62265cc92470037067ba6f593ba1fec336c2d56bf5afa422bc0433a4b3e not found: ID does not exist" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.398102 4858 scope.go:117] "RemoveContainer" containerID="45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a" Nov 22 09:29:16 crc kubenswrapper[4858]: E1122 09:29:16.402533 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a\": container with ID starting with 45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a not found: ID does not exist" containerID="45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.402581 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a"} err="failed to get container status \"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a\": rpc error: code = NotFound desc = could not find container \"45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a\": container with ID starting with 45758d8d32cc064ce93747b6609a05f3dbaa0e23f90b7cab7a52da54a0a7f74a not found: ID does not exist" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.552547 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:16 crc 
kubenswrapper[4858]: I1122 09:29:16.552897 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.595127 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.622767 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.634887 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:16 crc kubenswrapper[4858]: E1122 09:29:16.635348 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-api" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.635363 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-api" Nov 22 09:29:16 crc kubenswrapper[4858]: E1122 09:29:16.635383 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-log" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.635392 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-log" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.635610 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-log" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.635645 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3f5090-a442-427a-a031-801ee7a96745" containerName="nova-api-api" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.636849 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.639661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.645830 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.738641 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.791634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.791692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.791722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.791773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6k5f\" (UniqueName: \"kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.893524 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities\") pod \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.893585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content\") pod \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.893836 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities" (OuterVolumeSpecName: "utilities") pod "0085c7bc-c542-4c44-a178-2a22bfe4ac8e" (UID: "0085c7bc-c542-4c44-a178-2a22bfe4ac8e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.901497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5vs\" (UniqueName: \"kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs\") pod \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\" (UID: \"0085c7bc-c542-4c44-a178-2a22bfe4ac8e\") " Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.901800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.901851 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.901887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.901946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6k5f\" (UniqueName: \"kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.902068 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.902632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.906927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.913516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.913807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs" (OuterVolumeSpecName: "kube-api-access-vr5vs") pod "0085c7bc-c542-4c44-a178-2a22bfe4ac8e" (UID: "0085c7bc-c542-4c44-a178-2a22bfe4ac8e"). InnerVolumeSpecName "kube-api-access-vr5vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.921912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6k5f\" (UniqueName: \"kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f\") pod \"nova-api-0\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " pod="openstack/nova-api-0" Nov 22 09:29:16 crc kubenswrapper[4858]: I1122 09:29:16.942709 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0085c7bc-c542-4c44-a178-2a22bfe4ac8e" (UID: "0085c7bc-c542-4c44-a178-2a22bfe4ac8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.004071 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5vs\" (UniqueName: \"kubernetes.io/projected/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-kube-api-access-vr5vs\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.004107 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0085c7bc-c542-4c44-a178-2a22bfe4ac8e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.039175 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.319429 4858 generic.go:334] "Generic (PLEG): container finished" podID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerID="c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7" exitCode=0 Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.319471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerDied","Data":"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7"} Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.319821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lm8x" event={"ID":"0085c7bc-c542-4c44-a178-2a22bfe4ac8e","Type":"ContainerDied","Data":"fe311df40acb79877ad6c29c88cc7b7e6b725cc0b56a65fa12e297e8dd3b1bd3"} Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.319569 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lm8x" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.319876 4858 scope.go:117] "RemoveContainer" containerID="c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.354450 4858 scope.go:117] "RemoveContainer" containerID="6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.357453 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.365885 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lm8x"] Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.387865 4858 scope.go:117] "RemoveContainer" containerID="d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.407885 4858 scope.go:117] "RemoveContainer" containerID="c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7" Nov 22 09:29:17 crc kubenswrapper[4858]: E1122 09:29:17.408466 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7\": container with ID starting with c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7 not found: ID does not exist" containerID="c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.408518 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7"} err="failed to get container status \"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7\": rpc error: code = NotFound desc = could not find container \"c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7\": container with ID starting with c2b04b3b67e634c7ba1c279df244101d3482aadbf22351a089e81580d8bf5ba7 not found: ID does not exist" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.408554 4858 scope.go:117] "RemoveContainer" containerID="6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda" Nov 22 09:29:17 crc kubenswrapper[4858]: E1122 09:29:17.408953 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda\": container with ID starting with 6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda not found: ID does not exist" containerID="6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.408998 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda"} err="failed to get container status \"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda\": rpc error: code = NotFound desc = could not find container \"6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda\": container with ID starting with 6f067575e375ba1cd2de337302c92da825f8f4551c1db169d42a49494fcd9eda not found: ID does not exist" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.409032 4858 scope.go:117] "RemoveContainer" 
containerID="d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1" Nov 22 09:29:17 crc kubenswrapper[4858]: E1122 09:29:17.409586 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1\": container with ID starting with d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1 not found: ID does not exist" containerID="d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.409625 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1"} err="failed to get container status \"d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1\": rpc error: code = NotFound desc = could not find container \"d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1\": container with ID starting with d142954fa456c698f2397f7141e625677c954e81c2a7a5881c2ba400edfec4e1 not found: ID does not exist" Nov 22 09:29:17 crc kubenswrapper[4858]: W1122 09:29:17.479449 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ad8195_9b93_4c3b_9142_1ec21a04e87b.slice/crio-9bc7326c8d9c1dcb80e37482722e7180779f068f79e2b581782263879fe7fe27 WatchSource:0}: Error finding container 9bc7326c8d9c1dcb80e37482722e7180779f068f79e2b581782263879fe7fe27: Status 404 returned error can't find the container with id 9bc7326c8d9c1dcb80e37482722e7180779f068f79e2b581782263879fe7fe27 Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.481456 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.547267 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" path="/var/lib/kubelet/pods/0085c7bc-c542-4c44-a178-2a22bfe4ac8e/volumes" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.548029 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3f5090-a442-427a-a031-801ee7a96745" path="/var/lib/kubelet/pods/8c3f5090-a442-427a-a031-801ee7a96745/volumes" Nov 22 09:29:17 crc kubenswrapper[4858]: I1122 09:29:17.619390 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-858td" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="registry-server" probeResult="failure" output=< Nov 22 09:29:17 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 09:29:17 crc kubenswrapper[4858]: > Nov 22 09:29:18 crc kubenswrapper[4858]: I1122 09:29:18.331132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerStarted","Data":"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1"} Nov 22 09:29:18 crc kubenswrapper[4858]: I1122 09:29:18.331183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerStarted","Data":"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67"} Nov 22 09:29:18 crc kubenswrapper[4858]: I1122 09:29:18.331199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerStarted","Data":"9bc7326c8d9c1dcb80e37482722e7180779f068f79e2b581782263879fe7fe27"} Nov 22 09:29:18 crc kubenswrapper[4858]: I1122 09:29:18.355476 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.355455363 podStartE2EDuration="2.355455363s" podCreationTimestamp="2025-11-22 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:18.346889449 +0000 UTC m=+8320.188312485" watchObservedRunningTime="2025-11-22 09:29:18.355455363 +0000 UTC m=+8320.196878369" Nov 22 09:29:24 crc kubenswrapper[4858]: I1122 09:29:24.507361 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:24 crc kubenswrapper[4858]: I1122 09:29:24.507714 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.416530 4858 generic.go:334] "Generic (PLEG): container finished" podID="2318e3d5-dca0-4623-9c71-a153ac1136c6" containerID="bf78a975e9e85cb4fd472be323893a4c865331c1c2fcc41e7dafb5a79c88dcfa" exitCode=137 Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.416765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2318e3d5-dca0-4623-9c71-a153ac1136c6","Type":"ContainerDied","Data":"bf78a975e9e85cb4fd472be323893a4c865331c1c2fcc41e7dafb5a79c88dcfa"} Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.595339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.659086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.833071 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.863266 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.908595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72kf6\" (UniqueName: \"kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6\") pod \"2318e3d5-dca0-4623-9c71-a153ac1136c6\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.908642 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle\") pod \"2318e3d5-dca0-4623-9c71-a153ac1136c6\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.908678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data\") pod \"2318e3d5-dca0-4623-9c71-a153ac1136c6\" (UID: \"2318e3d5-dca0-4623-9c71-a153ac1136c6\") " Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.914897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6" (OuterVolumeSpecName: "kube-api-access-72kf6") pod "2318e3d5-dca0-4623-9c71-a153ac1136c6" (UID: "2318e3d5-dca0-4623-9c71-a153ac1136c6"). InnerVolumeSpecName "kube-api-access-72kf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.938235 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2318e3d5-dca0-4623-9c71-a153ac1136c6" (UID: "2318e3d5-dca0-4623-9c71-a153ac1136c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:26 crc kubenswrapper[4858]: I1122 09:29:26.939826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data" (OuterVolumeSpecName: "config-data") pod "2318e3d5-dca0-4623-9c71-a153ac1136c6" (UID: "2318e3d5-dca0-4623-9c71-a153ac1136c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.012997 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72kf6\" (UniqueName: \"kubernetes.io/projected/2318e3d5-dca0-4623-9c71-a153ac1136c6-kube-api-access-72kf6\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.013037 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.013047 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2318e3d5-dca0-4623-9c71-a153ac1136c6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.040796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.040858 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.429201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2318e3d5-dca0-4623-9c71-a153ac1136c6","Type":"ContainerDied","Data":"a89c7b83c63ca1d37e17c70eea6a16ee3040401f0bf0e5295ba4c4dac87707ec"} Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.429232 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.429271 4858 scope.go:117] "RemoveContainer" containerID="bf78a975e9e85cb4fd472be323893a4c865331c1c2fcc41e7dafb5a79c88dcfa" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.487832 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.506509 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.515308 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:29:27 crc kubenswrapper[4858]: E1122 09:29:27.515909 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2318e3d5-dca0-4623-9c71-a153ac1136c6" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.515933 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2318e3d5-dca0-4623-9c71-a153ac1136c6" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:29:27 crc kubenswrapper[4858]: E1122 09:29:27.515967 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="extract-content" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.515977 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="extract-content" Nov 22 09:29:27 crc kubenswrapper[4858]: E1122 09:29:27.515999 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="extract-utilities" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.516010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" 
containerName="extract-utilities" Nov 22 09:29:27 crc kubenswrapper[4858]: E1122 09:29:27.516030 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="registry-server" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.516039 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="registry-server" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.516275 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0085c7bc-c542-4c44-a178-2a22bfe4ac8e" containerName="registry-server" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.516299 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2318e3d5-dca0-4623-9c71-a153ac1136c6" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.517172 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.519048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.519558 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.519897 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.526023 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.572828 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2318e3d5-dca0-4623-9c71-a153ac1136c6" path="/var/lib/kubelet/pods/2318e3d5-dca0-4623-9c71-a153ac1136c6/volumes" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.627970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khv22\" (UniqueName: \"kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.628059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.628084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.628180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.628212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.729552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.730143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.730419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khv22\" (UniqueName: \"kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.730566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.730699 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.746215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.748054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.748702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khv22\" (UniqueName: \"kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" 
Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.748765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.749607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:27 crc kubenswrapper[4858]: I1122 09:29:27.877883 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:28 crc kubenswrapper[4858]: I1122 09:29:28.125348 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:28 crc kubenswrapper[4858]: I1122 09:29:28.125741 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:28 crc kubenswrapper[4858]: W1122 09:29:28.325006 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd68b47b_06e7_4e59_aad6_cae8c376573d.slice/crio-7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7 WatchSource:0}: Error finding container 7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7: Status 404 returned error can't find the container with id 7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7 Nov 22 09:29:28 crc kubenswrapper[4858]: I1122 09:29:28.330163 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:29:28 crc kubenswrapper[4858]: I1122 09:29:28.440772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd68b47b-06e7-4e59-aad6-cae8c376573d","Type":"ContainerStarted","Data":"7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7"} Nov 22 09:29:28 crc kubenswrapper[4858]: I1122 09:29:28.440996 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-858td" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="registry-server" containerID="cri-o://a64434333a390bb571799f987682e4b16aecc0d7ccdb263e229d6a02273f9251" gracePeriod=2 Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.451754 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerID="a64434333a390bb571799f987682e4b16aecc0d7ccdb263e229d6a02273f9251" exitCode=0 Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.451814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" 
event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerDied","Data":"a64434333a390bb571799f987682e4b16aecc0d7ccdb263e229d6a02273f9251"} Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.452139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858td" event={"ID":"2e47a092-2758-4294-82f3-6b7baf0fc912","Type":"ContainerDied","Data":"d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22"} Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.452155 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b65b59660c9e4d5147ffc5b08f5fb29c2c1a0e270155d7c35e26e613437f22" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.454200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd68b47b-06e7-4e59-aad6-cae8c376573d","Type":"ContainerStarted","Data":"8070c89d3808b68f0b98fb9cbd32312e22d937be61d9757f60eb633a06522feb"} Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.478666 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.4786264129999998 podStartE2EDuration="2.478626413s" podCreationTimestamp="2025-11-22 09:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:29.470896015 +0000 UTC m=+8331.312319041" watchObservedRunningTime="2025-11-22 09:29:29.478626413 +0000 UTC m=+8331.320049429" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.525214 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.665131 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities\") pod \"2e47a092-2758-4294-82f3-6b7baf0fc912\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.665221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content\") pod \"2e47a092-2758-4294-82f3-6b7baf0fc912\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.665761 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9pkc\" (UniqueName: \"kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc\") pod \"2e47a092-2758-4294-82f3-6b7baf0fc912\" (UID: \"2e47a092-2758-4294-82f3-6b7baf0fc912\") " Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.666146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities" (OuterVolumeSpecName: "utilities") pod "2e47a092-2758-4294-82f3-6b7baf0fc912" (UID: "2e47a092-2758-4294-82f3-6b7baf0fc912"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.666474 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.672575 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc" (OuterVolumeSpecName: "kube-api-access-n9pkc") pod "2e47a092-2758-4294-82f3-6b7baf0fc912" (UID: "2e47a092-2758-4294-82f3-6b7baf0fc912"). InnerVolumeSpecName "kube-api-access-n9pkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.753407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e47a092-2758-4294-82f3-6b7baf0fc912" (UID: "2e47a092-2758-4294-82f3-6b7baf0fc912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.767749 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9pkc\" (UniqueName: \"kubernetes.io/projected/2e47a092-2758-4294-82f3-6b7baf0fc912-kube-api-access-n9pkc\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:29 crc kubenswrapper[4858]: I1122 09:29:29.767792 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e47a092-2758-4294-82f3-6b7baf0fc912-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.464145 4858 generic.go:334] "Generic (PLEG): container finished" podID="db757e00-4494-41fc-89da-db26b197f590" containerID="d7ac8ccbd118017a3374ab8213f0014483278d67a520faa1f45f2a7a093a9375" exitCode=137 Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.464231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db757e00-4494-41fc-89da-db26b197f590","Type":"ContainerDied","Data":"d7ac8ccbd118017a3374ab8213f0014483278d67a520faa1f45f2a7a093a9375"} Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.464539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-858td" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.507083 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.514657 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-858td"] Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.613860 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.690610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle\") pod \"db757e00-4494-41fc-89da-db26b197f590\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.690686 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data\") pod \"db757e00-4494-41fc-89da-db26b197f590\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.690874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n59gg\" (UniqueName: \"kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg\") pod \"db757e00-4494-41fc-89da-db26b197f590\" (UID: \"db757e00-4494-41fc-89da-db26b197f590\") " Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.695286 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg" (OuterVolumeSpecName: "kube-api-access-n59gg") pod "db757e00-4494-41fc-89da-db26b197f590" (UID: "db757e00-4494-41fc-89da-db26b197f590"). InnerVolumeSpecName "kube-api-access-n59gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.721448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db757e00-4494-41fc-89da-db26b197f590" (UID: "db757e00-4494-41fc-89da-db26b197f590"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.722890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data" (OuterVolumeSpecName: "config-data") pod "db757e00-4494-41fc-89da-db26b197f590" (UID: "db757e00-4494-41fc-89da-db26b197f590"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.793791 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.793826 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db757e00-4494-41fc-89da-db26b197f590-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:30 crc kubenswrapper[4858]: I1122 09:29:30.793839 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n59gg\" (UniqueName: \"kubernetes.io/projected/db757e00-4494-41fc-89da-db26b197f590-kube-api-access-n59gg\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.475498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db757e00-4494-41fc-89da-db26b197f590","Type":"ContainerDied","Data":"4b86b3213cc1ea465a15dafe425c6532294d948f3358b9bd009cc7db0ff2e2f2"} Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.475625 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.476486 4858 scope.go:117] "RemoveContainer" containerID="d7ac8ccbd118017a3374ab8213f0014483278d67a520faa1f45f2a7a093a9375" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.519013 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.577252 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" path="/var/lib/kubelet/pods/2e47a092-2758-4294-82f3-6b7baf0fc912/volumes" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.579083 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.579157 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:31 crc kubenswrapper[4858]: E1122 09:29:31.579812 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db757e00-4494-41fc-89da-db26b197f590" containerName="nova-scheduler-scheduler" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.579850 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="db757e00-4494-41fc-89da-db26b197f590" containerName="nova-scheduler-scheduler" Nov 22 09:29:31 crc kubenswrapper[4858]: E1122 09:29:31.579887 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="extract-utilities" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.579905 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="extract-utilities" Nov 22 09:29:31 crc kubenswrapper[4858]: E1122 09:29:31.579928 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="extract-content" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.579944 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="extract-content" Nov 22 09:29:31 crc kubenswrapper[4858]: E1122 09:29:31.580009 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="registry-server" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.580027 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="registry-server" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.581012 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e47a092-2758-4294-82f3-6b7baf0fc912" containerName="registry-server" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.581128 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="db757e00-4494-41fc-89da-db26b197f590" containerName="nova-scheduler-scheduler" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.582541 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.582694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.584673 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.710068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l827q\" (UniqueName: \"kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.710496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.710869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.812789 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.813186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l827q\" (UniqueName: \"kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.813234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.818176 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.818551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.834211 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l827q\" (UniqueName: \"kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q\") pod \"nova-scheduler-0\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " pod="openstack/nova-scheduler-0" Nov 22 09:29:31 crc kubenswrapper[4858]: I1122 09:29:31.907259 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:29:32 crc kubenswrapper[4858]: I1122 09:29:32.167488 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:32 crc kubenswrapper[4858]: W1122 09:29:32.173673 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92de51b5_1e92_49e4_942e_7d9be11a4bef.slice/crio-233b1b858f8d4bc9eed63753c5c8bb2e193eb4649b032db0f17b79bd576031f8 WatchSource:0}: Error finding container 233b1b858f8d4bc9eed63753c5c8bb2e193eb4649b032db0f17b79bd576031f8: Status 404 returned error can't find the container with id 233b1b858f8d4bc9eed63753c5c8bb2e193eb4649b032db0f17b79bd576031f8 Nov 22 09:29:32 crc kubenswrapper[4858]: I1122 09:29:32.487465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92de51b5-1e92-49e4-942e-7d9be11a4bef","Type":"ContainerStarted","Data":"f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d"} Nov 22 09:29:32 crc kubenswrapper[4858]: I1122 09:29:32.487844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92de51b5-1e92-49e4-942e-7d9be11a4bef","Type":"ContainerStarted","Data":"233b1b858f8d4bc9eed63753c5c8bb2e193eb4649b032db0f17b79bd576031f8"} Nov 22 09:29:32 crc kubenswrapper[4858]: I1122 09:29:32.513394 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.513372562 podStartE2EDuration="1.513372562s" podCreationTimestamp="2025-11-22 09:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:32.50670971 +0000 UTC m=+8334.348132766" watchObservedRunningTime="2025-11-22 09:29:32.513372562 +0000 UTC m=+8334.354795588" Nov 22 09:29:32 crc kubenswrapper[4858]: I1122 09:29:32.878943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:33 crc kubenswrapper[4858]: I1122 09:29:33.548375 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db757e00-4494-41fc-89da-db26b197f590" path="/var/lib/kubelet/pods/db757e00-4494-41fc-89da-db26b197f590/volumes" Nov 22 09:29:34 crc kubenswrapper[4858]: I1122 09:29:34.495532 4858 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:34 crc kubenswrapper[4858]: I1122 09:29:34.498487 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.091581 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8k55m"] Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.101836 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9b4b-account-create-zc56d"] Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.133965 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9b4b-account-create-zc56d"] Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.152606 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8k55m"] Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.551802 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855716ba-1fbf-4f5f-9f67-0a80465ebe0a" path="/var/lib/kubelet/pods/855716ba-1fbf-4f5f-9f67-0a80465ebe0a/volumes" Nov 22 09:29:35 crc kubenswrapper[4858]: I1122 09:29:35.553164 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba61de2-085b-4ec5-ab7a-08e789be3bfc" path="/var/lib/kubelet/pods/cba61de2-085b-4ec5-ab7a-08e789be3bfc/volumes" Nov 22 09:29:36 crc kubenswrapper[4858]: I1122 09:29:36.908387 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 09:29:37 crc kubenswrapper[4858]: I1122 09:29:37.079835 4858 scope.go:117] "RemoveContainer" containerID="af372afad967b23f67cf29821ca97e5317e9a9c4df206a68425c39a7708ca250" Nov 22 09:29:37 crc kubenswrapper[4858]: I1122 09:29:37.120974 4858 scope.go:117] "RemoveContainer" containerID="b2a538c5e5d4be370d699ca1781b6a69ff6d406f53969c96747dfc641f02d840" Nov 22 09:29:37 crc kubenswrapper[4858]: I1122 09:29:37.878486 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:37 crc kubenswrapper[4858]: I1122 09:29:37.931142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.123525 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.123552 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.567592 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.730363 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-b7sct"] Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.731618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.734269 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.734308 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.743128 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7sct"] Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.780819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.780907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5k6\" (UniqueName: \"kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.781000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.781092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.882422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.882738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.882863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.882891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5k6\" (UniqueName: \"kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.889153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.889658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.898551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5k6\" (UniqueName: \"kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:38 crc kubenswrapper[4858]: I1122 09:29:38.902851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts\") pod \"nova-cell1-cell-mapping-b7sct\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:39 crc kubenswrapper[4858]: I1122 09:29:39.054975 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:39 crc kubenswrapper[4858]: I1122 09:29:39.570907 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7sct"] Nov 22 09:29:39 crc kubenswrapper[4858]: W1122 09:29:39.571097 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd03c4f4_cb47_4f93_a0f9_01ba93c3ecb0.slice/crio-7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2 WatchSource:0}: Error finding container 7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2: Status 404 returned error can't find the container with id 7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2 Nov 22 09:29:40 crc kubenswrapper[4858]: I1122 09:29:40.576818 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7sct" event={"ID":"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0","Type":"ContainerStarted","Data":"e61d56cee7da5d021ef44da5aac6e5c36bb8477f6f981ef9461bdbf5be01bb27"} Nov 22 09:29:40 crc kubenswrapper[4858]: I1122 09:29:40.578429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7sct" event={"ID":"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0","Type":"ContainerStarted","Data":"7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2"} Nov 22 09:29:40 crc kubenswrapper[4858]: I1122 09:29:40.598133 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-b7sct" podStartSLOduration=2.598111356 podStartE2EDuration="2.598111356s" podCreationTimestamp="2025-11-22 09:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:29:40.597517917 +0000 UTC m=+8342.438940923" watchObservedRunningTime="2025-11-22 09:29:40.598111356 +0000 UTC m=+8342.439534382" Nov 22 09:29:41 crc kubenswrapper[4858]: I1122 09:29:41.908028 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 09:29:41 crc kubenswrapper[4858]: I1122 09:29:41.941680 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 09:29:42 crc kubenswrapper[4858]: I1122 09:29:42.632708 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 09:29:44 crc kubenswrapper[4858]: I1122 09:29:44.495561 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:44 crc kubenswrapper[4858]: I1122 09:29:44.495765 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.104:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:29:44 crc kubenswrapper[4858]: I1122 09:29:44.618869 4858 generic.go:334] "Generic (PLEG): container finished" podID="dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" containerID="e61d56cee7da5d021ef44da5aac6e5c36bb8477f6f981ef9461bdbf5be01bb27" exitCode=0 Nov 22 09:29:44 crc kubenswrapper[4858]: I1122 
09:29:44.618921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7sct" event={"ID":"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0","Type":"ContainerDied","Data":"e61d56cee7da5d021ef44da5aac6e5c36bb8477f6f981ef9461bdbf5be01bb27"} Nov 22 09:29:45 crc kubenswrapper[4858]: I1122 09:29:45.895520 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.026062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts\") pod \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.026135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5k6\" (UniqueName: \"kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6\") pod \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.026400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data\") pod \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.026504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle\") pod \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\" (UID: \"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0\") " Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.032678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6" (OuterVolumeSpecName: "kube-api-access-sp5k6") pod "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" (UID: "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0"). InnerVolumeSpecName "kube-api-access-sp5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.034628 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts" (OuterVolumeSpecName: "scripts") pod "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" (UID: "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.064072 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6nh8p"] Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.076286 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6nh8p"] Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.087590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" (UID: "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.090957 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data" (OuterVolumeSpecName: "config-data") pod "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" (UID: "dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.129677 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.129708 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.129718 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.129727 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5k6\" (UniqueName: \"kubernetes.io/projected/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0-kube-api-access-sp5k6\") on node \"crc\" DevicePath \"\"" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.635155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7sct" event={"ID":"dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0","Type":"ContainerDied","Data":"7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2"} Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.635199 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7323e78cbdd4474fc2428ad5847a4269c20a37ed6f9123d0faae278a86c969c2" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.635208 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7sct" Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.811965 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.812518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" containerID="cri-o://fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67" gracePeriod=30 Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.812599 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" containerID="cri-o://36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1" gracePeriod=30 Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.826194 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.826628 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" containerID="cri-o://f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" gracePeriod=30 Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.842856 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.843115 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" containerID="cri-o://a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79" gracePeriod=30 Nov 22 09:29:46 crc kubenswrapper[4858]: I1122 09:29:46.843205 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" containerID="cri-o://19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8" gracePeriod=30 Nov 22 09:29:46 crc kubenswrapper[4858]: E1122 09:29:46.909972 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:46 crc kubenswrapper[4858]: E1122 09:29:46.912702 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:46 crc kubenswrapper[4858]: E1122 09:29:46.913896 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:46 crc kubenswrapper[4858]: E1122 09:29:46.913923 4858 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.039952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.040224 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.545445 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f800d9-b999-46fe-b9ab-4bac8356fcdd" path="/var/lib/kubelet/pods/d7f800d9-b999-46fe-b9ab-4bac8356fcdd/volumes" Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.646120 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d875d57-7b5c-405b-a183-3cad85f16980" containerID="a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79" exitCode=143 Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.646186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerDied","Data":"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79"} Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.648143 4858 generic.go:334] "Generic (PLEG): container finished" podID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerID="fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67" exitCode=143 Nov 22 09:29:47 crc kubenswrapper[4858]: I1122 09:29:47.648170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerDied","Data":"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67"} Nov 22 09:29:51 crc kubenswrapper[4858]: E1122 09:29:51.910235 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:51 crc kubenswrapper[4858]: E1122 09:29:51.913148 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:51 crc kubenswrapper[4858]: E1122 09:29:51.915551 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:51 crc kubenswrapper[4858]: E1122 09:29:51.915662 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:29:56 crc kubenswrapper[4858]: 
E1122 09:29:56.909693 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:56 crc kubenswrapper[4858]: E1122 09:29:56.911943 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:56 crc kubenswrapper[4858]: E1122 09:29:56.914665 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:29:56 crc kubenswrapper[4858]: E1122 09:29:56.914723 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:29:59 crc kubenswrapper[4858]: I1122 09:29:59.035434 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gmjl5"] Nov 22 09:29:59 crc kubenswrapper[4858]: I1122 09:29:59.045741 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gmjl5"] Nov 22 09:29:59 crc kubenswrapper[4858]: I1122 09:29:59.572278 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18753f8d-6d51-430b-aa62-f9ee41cf917c" path="/var/lib/kubelet/pods/18753f8d-6d51-430b-aa62-f9ee41cf917c/volumes" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.148274 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7"] Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.149060 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" containerName="nova-manage" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.149077 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" containerName="nova-manage" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.149377 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" containerName="nova-manage" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.149988 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.156983 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.157161 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.179392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.311801 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.311928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.312107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfkh\" (UniqueName: \"kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.414496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.414627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.414705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfkh\" (UniqueName: \"kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.415809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume\") pod 
\"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.421114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.452297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfkh\" (UniqueName: \"kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh\") pod \"collect-profiles-29396730-kr6d7\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.481611 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.609392 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.611911 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.719633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mns6q\" (UniqueName: \"kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q\") pod \"5d875d57-7b5c-405b-a183-3cad85f16980\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.719683 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6k5f\" (UniqueName: \"kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f\") pod \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.719713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs\") pod \"5d875d57-7b5c-405b-a183-3cad85f16980\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.719799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle\") pod \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data\") pod \"5d875d57-7b5c-405b-a183-3cad85f16980\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720408 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data\") pod \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs\") pod \"5d875d57-7b5c-405b-a183-3cad85f16980\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs\") pod \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\" (UID: \"f6ad8195-9b93-4c3b-9142-1ec21a04e87b\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs" (OuterVolumeSpecName: "logs") pod "5d875d57-7b5c-405b-a183-3cad85f16980" (UID: "5d875d57-7b5c-405b-a183-3cad85f16980"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle\") pod \"5d875d57-7b5c-405b-a183-3cad85f16980\" (UID: \"5d875d57-7b5c-405b-a183-3cad85f16980\") " Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.720969 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d875d57-7b5c-405b-a183-3cad85f16980-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.723618 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs" (OuterVolumeSpecName: "logs") pod "f6ad8195-9b93-4c3b-9142-1ec21a04e87b" (UID: "f6ad8195-9b93-4c3b-9142-1ec21a04e87b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.724175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q" (OuterVolumeSpecName: "kube-api-access-mns6q") pod "5d875d57-7b5c-405b-a183-3cad85f16980" (UID: "5d875d57-7b5c-405b-a183-3cad85f16980"). InnerVolumeSpecName "kube-api-access-mns6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.724234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f" (OuterVolumeSpecName: "kube-api-access-w6k5f") pod "f6ad8195-9b93-4c3b-9142-1ec21a04e87b" (UID: "f6ad8195-9b93-4c3b-9142-1ec21a04e87b"). InnerVolumeSpecName "kube-api-access-w6k5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.745718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d875d57-7b5c-405b-a183-3cad85f16980" (UID: "5d875d57-7b5c-405b-a183-3cad85f16980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.745719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data" (OuterVolumeSpecName: "config-data") pod "f6ad8195-9b93-4c3b-9142-1ec21a04e87b" (UID: "f6ad8195-9b93-4c3b-9142-1ec21a04e87b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.749792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data" (OuterVolumeSpecName: "config-data") pod "5d875d57-7b5c-405b-a183-3cad85f16980" (UID: "5d875d57-7b5c-405b-a183-3cad85f16980"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.768825 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ad8195-9b93-4c3b-9142-1ec21a04e87b" (UID: "f6ad8195-9b93-4c3b-9142-1ec21a04e87b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.771667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5d875d57-7b5c-405b-a183-3cad85f16980" (UID: "5d875d57-7b5c-405b-a183-3cad85f16980"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.783903 4858 generic.go:334] "Generic (PLEG): container finished" podID="5d875d57-7b5c-405b-a183-3cad85f16980" containerID="19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8" exitCode=0 Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.783956 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.783992 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerDied","Data":"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8"} Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.784032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5d875d57-7b5c-405b-a183-3cad85f16980","Type":"ContainerDied","Data":"1bf73107ce27688f26a7ac3927456341d5a4a44c2abc7e3558acb61a984df6f0"} Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.784055 4858 scope.go:117] "RemoveContainer" containerID="19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.787155 4858 generic.go:334] "Generic (PLEG): container finished" podID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerID="36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1" exitCode=0 Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.787194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerDied","Data":"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1"} Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.787237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6ad8195-9b93-4c3b-9142-1ec21a04e87b","Type":"ContainerDied","Data":"9bc7326c8d9c1dcb80e37482722e7180779f068f79e2b581782263879fe7fe27"} Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.787240 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.814043 4858 scope.go:117] "RemoveContainer" containerID="a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.823991 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824449 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824488 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824508 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mns6q\" (UniqueName: \"kubernetes.io/projected/5d875d57-7b5c-405b-a183-3cad85f16980-kube-api-access-mns6q\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824523 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6k5f\" (UniqueName: \"kubernetes.io/projected/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-kube-api-access-w6k5f\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824536 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824549 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d875d57-7b5c-405b-a183-3cad85f16980-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.824562 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ad8195-9b93-4c3b-9142-1ec21a04e87b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.846302 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.849624 4858 scope.go:117] "RemoveContainer" containerID="19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.850383 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8\": container with ID starting with 19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8 not found: ID does not exist" containerID="19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.850419 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8"} err="failed to get container status \"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8\": rpc error: code = NotFound desc = 
could not find container \"19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8\": container with ID starting with 19c3961d23e3ba5b0c2e1c4d33f16d1e43e3c8634f7963e12dfe47cc8801bbb8 not found: ID does not exist" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.850446 4858 scope.go:117] "RemoveContainer" containerID="a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.850830 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79\": container with ID starting with a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79 not found: ID does not exist" containerID="a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.850876 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79"} err="failed to get container status \"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79\": rpc error: code = NotFound desc = could not find container \"a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79\": container with ID starting with a75ea54045798eb94b647602c2a9b0994f730fc24e45ae443c20e1135ea2de79 not found: ID does not exist" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.850896 4858 scope.go:117] "RemoveContainer" containerID="36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.870798 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.881183 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.897531 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.906590 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.907395 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907417 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.907435 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907443 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.907467 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907475 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.907507 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907516 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907820 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-log" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907844 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-log" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907858 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" containerName="nova-api-api" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.907876 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" containerName="nova-metadata-metadata" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.909233 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.910511 4858 scope.go:117] "RemoveContainer" containerID="fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.911674 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.917696 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.929813 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.931363 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.934068 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.934284 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.938700 4858 scope.go:117] "RemoveContainer" containerID="36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.938945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.939062 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1\": container with ID starting with 36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1 not found: ID does not exist" containerID="36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.939090 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1"} err="failed to get container status \"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1\": rpc error: code = NotFound desc = could not find container \"36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1\": container with ID starting with 36544662cf50d7dc73397521745fe37203766c9413c9218b73fd3dae7e6beca1 not found: ID does not exist" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.939110 4858 scope.go:117] "RemoveContainer" containerID="fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67" Nov 22 09:30:00 crc kubenswrapper[4858]: E1122 09:30:00.939424 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67\": container with ID starting with fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67 not found: ID does not exist" containerID="fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.939446 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67"} err="failed to get container status \"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67\": rpc error: code = NotFound desc = could not find container \"fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67\": container with ID starting with fa980e8ee2b9415393d3c0d9cf8fa50290de8a6ec93deaffe8a1c2b1e62b5f67 not found: ID does not exist" Nov 22 09:30:00 crc kubenswrapper[4858]: I1122 09:30:00.970109 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7"] Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqn9n\" (UniqueName: \"kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zq2\" (UniqueName: \"kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031547 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.031600 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133286 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133368 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zq2\" (UniqueName: \"kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133418 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqn9n\" (UniqueName: \"kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.133884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.134018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.138177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.138263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.138798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.138873 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.139024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.150098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqn9n\" (UniqueName: \"kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n\") pod \"nova-metadata-0\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.152444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zq2\" (UniqueName: \"kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2\") pod \"nova-api-0\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.231078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.254617 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.548869 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d875d57-7b5c-405b-a183-3cad85f16980" path="/var/lib/kubelet/pods/5d875d57-7b5c-405b-a183-3cad85f16980/volumes" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.549874 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ad8195-9b93-4c3b-9142-1ec21a04e87b" path="/var/lib/kubelet/pods/f6ad8195-9b93-4c3b-9142-1ec21a04e87b/volumes" Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.694109 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.704085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:01 crc kubenswrapper[4858]: W1122 09:30:01.704250 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e952720_9083_48e0_96d1_54f1cfacfbf9.slice/crio-be6d76c6e63022d4269f76f4f45382ba6d9a577eda40fb9cbeffb17a7e09c94a WatchSource:0}: Error finding container be6d76c6e63022d4269f76f4f45382ba6d9a577eda40fb9cbeffb17a7e09c94a: Status 404 returned error can't find the container with id be6d76c6e63022d4269f76f4f45382ba6d9a577eda40fb9cbeffb17a7e09c94a Nov 22 09:30:01 crc kubenswrapper[4858]: W1122 09:30:01.704503 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd82031e8_bb5f_4ee0_874b_277185e0421c.slice/crio-6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50 WatchSource:0}: Error finding container 6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50: Status 404 returned error can't find the container with id 6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50 Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.799121 4858 generic.go:334] "Generic (PLEG): container finished" podID="616a4853-0a12-4803-97bc-d871c8aec81e" containerID="f2d05d6bfb2298a1a7a2493586b0ac17eadcaf0a709ee67d69e9e083ba6ff74b" exitCode=0 Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.799259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" event={"ID":"616a4853-0a12-4803-97bc-d871c8aec81e","Type":"ContainerDied","Data":"f2d05d6bfb2298a1a7a2493586b0ac17eadcaf0a709ee67d69e9e083ba6ff74b"} Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.799376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" event={"ID":"616a4853-0a12-4803-97bc-d871c8aec81e","Type":"ContainerStarted","Data":"a25d305f918c6d5c4770bc756666b2c65789f74f748c2050ebfd1dc16551e15a"} Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.801929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerStarted","Data":"6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50"} Nov 22 09:30:01 crc kubenswrapper[4858]: I1122 09:30:01.803434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerStarted","Data":"be6d76c6e63022d4269f76f4f45382ba6d9a577eda40fb9cbeffb17a7e09c94a"} Nov 22 09:30:01 crc kubenswrapper[4858]: E1122 09:30:01.909412 4858 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:01 crc kubenswrapper[4858]: E1122 09:30:01.910972 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:01 crc kubenswrapper[4858]: E1122 09:30:01.912550 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:01 crc kubenswrapper[4858]: E1122 09:30:01.912633 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.817727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerStarted","Data":"6e096e4a2ae7155d6d8c46d23f8467defc40e77a3fc4dbf0e562a5a006031139"} Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.817805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerStarted","Data":"ef464a3f7686d41a4099830d4c38a9d573796b3dd992d56d2f045b8d89ef85d0"} Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.823480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerStarted","Data":"2a569e7aef5c1478654a43f23ff834b089ab7b81d90062f8bc434d0602c00539"} Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.823662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerStarted","Data":"ffde0c5535e5575efcd312c44becdc816a46ec2830edcdf8c7cac194047d0a3d"} Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.841295 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.841263124 podStartE2EDuration="2.841263124s" podCreationTimestamp="2025-11-22 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:30:02.840882393 +0000 UTC m=+8364.682305429" watchObservedRunningTime="2025-11-22 09:30:02.841263124 +0000 UTC m=+8364.682686150" Nov 22 09:30:02 crc kubenswrapper[4858]: I1122 09:30:02.868451 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.868332281 podStartE2EDuration="2.868332281s" podCreationTimestamp="2025-11-22 09:30:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:30:02.867801014 +0000 UTC m=+8364.709224040" watchObservedRunningTime="2025-11-22 09:30:02.868332281 +0000 UTC m=+8364.709755287" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.182475 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.275097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume\") pod \"616a4853-0a12-4803-97bc-d871c8aec81e\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.275184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfkh\" (UniqueName: \"kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh\") pod \"616a4853-0a12-4803-97bc-d871c8aec81e\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.275216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume\") pod \"616a4853-0a12-4803-97bc-d871c8aec81e\" (UID: \"616a4853-0a12-4803-97bc-d871c8aec81e\") " Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.276036 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume" (OuterVolumeSpecName: "config-volume") pod "616a4853-0a12-4803-97bc-d871c8aec81e" (UID: "616a4853-0a12-4803-97bc-d871c8aec81e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.280687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh" (OuterVolumeSpecName: "kube-api-access-6bfkh") pod "616a4853-0a12-4803-97bc-d871c8aec81e" (UID: "616a4853-0a12-4803-97bc-d871c8aec81e"). InnerVolumeSpecName "kube-api-access-6bfkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.280715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "616a4853-0a12-4803-97bc-d871c8aec81e" (UID: "616a4853-0a12-4803-97bc-d871c8aec81e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.377370 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/616a4853-0a12-4803-97bc-d871c8aec81e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.377414 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bfkh\" (UniqueName: \"kubernetes.io/projected/616a4853-0a12-4803-97bc-d871c8aec81e-kube-api-access-6bfkh\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.377428 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/616a4853-0a12-4803-97bc-d871c8aec81e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.836475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" event={"ID":"616a4853-0a12-4803-97bc-d871c8aec81e","Type":"ContainerDied","Data":"a25d305f918c6d5c4770bc756666b2c65789f74f748c2050ebfd1dc16551e15a"} Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.836786 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a25d305f918c6d5c4770bc756666b2c65789f74f748c2050ebfd1dc16551e15a" Nov 22 09:30:03 crc kubenswrapper[4858]: I1122 09:30:03.836686 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-kr6d7" Nov 22 09:30:04 crc kubenswrapper[4858]: I1122 09:30:04.249439 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm"] Nov 22 09:30:04 crc kubenswrapper[4858]: I1122 09:30:04.263068 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-kqzcm"] Nov 22 09:30:05 crc kubenswrapper[4858]: I1122 09:30:05.548331 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dd9777-beef-4e69-89b5-19901541212d" path="/var/lib/kubelet/pods/08dd9777-beef-4e69-89b5-19901541212d/volumes" Nov 22 09:30:06 crc kubenswrapper[4858]: I1122 09:30:06.254819 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:30:06 crc kubenswrapper[4858]: I1122 09:30:06.254871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:30:06 crc kubenswrapper[4858]: E1122 09:30:06.909572 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:06 crc kubenswrapper[4858]: E1122 09:30:06.911802 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:06 crc kubenswrapper[4858]: E1122 09:30:06.913930 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:06 crc kubenswrapper[4858]: E1122 09:30:06.913980 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:11 crc kubenswrapper[4858]: I1122 09:30:11.231968 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:30:11 crc kubenswrapper[4858]: I1122 09:30:11.232700 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:30:11 crc kubenswrapper[4858]: I1122 09:30:11.255392 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:30:11 crc kubenswrapper[4858]: I1122 09:30:11.255503 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:30:11 crc kubenswrapper[4858]: E1122 09:30:11.909900 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:11 crc kubenswrapper[4858]: E1122 09:30:11.910902 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:11 crc kubenswrapper[4858]: E1122 09:30:11.912478 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:11 crc kubenswrapper[4858]: E1122 09:30:11.912521 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:12 crc kubenswrapper[4858]: I1122 09:30:12.325652 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.112:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:30:12 crc kubenswrapper[4858]: I1122 09:30:12.325705 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.113:8775/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Nov 22 09:30:12 crc kubenswrapper[4858]: I1122 09:30:12.326039 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.112:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 09:30:12 crc kubenswrapper[4858]: I1122 09:30:12.326133 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.113:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:30:16 crc kubenswrapper[4858]: E1122 09:30:16.908888 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d is running failed: container process not found" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:16 crc kubenswrapper[4858]: E1122 09:30:16.909822 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d is running failed: container process not found" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:16 crc kubenswrapper[4858]: E1122 09:30:16.910210 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d is running failed: container process not found" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:30:16 crc kubenswrapper[4858]: E1122 09:30:16.910258 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:16 crc kubenswrapper[4858]: I1122 09:30:16.999567 4858 generic.go:334] "Generic (PLEG): container finished" podID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" exitCode=137 Nov 22 09:30:16 crc kubenswrapper[4858]: I1122 09:30:16.999677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92de51b5-1e92-49e4-942e-7d9be11a4bef","Type":"ContainerDied","Data":"f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d"} Nov 22 09:30:17 crc kubenswrapper[4858]: I1122 09:30:17.908965 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:30:17 crc kubenswrapper[4858]: I1122 09:30:17.976086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle\") pod \"92de51b5-1e92-49e4-942e-7d9be11a4bef\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " Nov 22 09:30:17 crc kubenswrapper[4858]: I1122 09:30:17.976151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l827q\" (UniqueName: \"kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q\") pod \"92de51b5-1e92-49e4-942e-7d9be11a4bef\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " Nov 22 09:30:17 crc kubenswrapper[4858]: I1122 09:30:17.976378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data\") pod \"92de51b5-1e92-49e4-942e-7d9be11a4bef\" (UID: \"92de51b5-1e92-49e4-942e-7d9be11a4bef\") " Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.005383 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q" (OuterVolumeSpecName: "kube-api-access-l827q") pod "92de51b5-1e92-49e4-942e-7d9be11a4bef" (UID: "92de51b5-1e92-49e4-942e-7d9be11a4bef"). InnerVolumeSpecName "kube-api-access-l827q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.010813 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data" (OuterVolumeSpecName: "config-data") pod "92de51b5-1e92-49e4-942e-7d9be11a4bef" (UID: "92de51b5-1e92-49e4-942e-7d9be11a4bef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.012822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92de51b5-1e92-49e4-942e-7d9be11a4bef","Type":"ContainerDied","Data":"233b1b858f8d4bc9eed63753c5c8bb2e193eb4649b032db0f17b79bd576031f8"} Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.012871 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.012904 4858 scope.go:117] "RemoveContainer" containerID="f577ebddeb4982b0925b7485fddbf6043f6d22bbc10a0a86462640049fa3176d" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.021201 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92de51b5-1e92-49e4-942e-7d9be11a4bef" (UID: "92de51b5-1e92-49e4-942e-7d9be11a4bef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.078043 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.078280 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l827q\" (UniqueName: \"kubernetes.io/projected/92de51b5-1e92-49e4-942e-7d9be11a4bef-kube-api-access-l827q\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.078385 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92de51b5-1e92-49e4-942e-7d9be11a4bef-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.342943 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.351836 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.361332 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:30:18 crc kubenswrapper[4858]: E1122 09:30:18.361991 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.362079 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:18 crc kubenswrapper[4858]: E1122 09:30:18.362174 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616a4853-0a12-4803-97bc-d871c8aec81e" containerName="collect-profiles" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.362245 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="616a4853-0a12-4803-97bc-d871c8aec81e" containerName="collect-profiles" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.363138 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" containerName="nova-scheduler-scheduler" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.363253 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="616a4853-0a12-4803-97bc-d871c8aec81e" containerName="collect-profiles" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.364153 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.367263 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.371286 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.381871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.381922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.381962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rnw\" (UniqueName: \"kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.483486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.483554 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.484206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rnw\" (UniqueName: \"kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.489807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.498552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.505407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rnw\" (UniqueName: 
\"kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw\") pod \"nova-scheduler-0\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.688095 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:30:18 crc kubenswrapper[4858]: I1122 09:30:18.963078 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:30:19 crc kubenswrapper[4858]: I1122 09:30:19.021718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"419367a7-1838-4692-b6fc-f266985765d7","Type":"ContainerStarted","Data":"951a9be14148e095ca4c2e063c098b84fadfbe76d48a87be89075693d1592785"} Nov 22 09:30:19 crc kubenswrapper[4858]: I1122 09:30:19.568529 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92de51b5-1e92-49e4-942e-7d9be11a4bef" path="/var/lib/kubelet/pods/92de51b5-1e92-49e4-942e-7d9be11a4bef/volumes" Nov 22 09:30:20 crc kubenswrapper[4858]: I1122 09:30:20.036433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"419367a7-1838-4692-b6fc-f266985765d7","Type":"ContainerStarted","Data":"9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364"} Nov 22 09:30:20 crc kubenswrapper[4858]: I1122 09:30:20.071765 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.071744265 podStartE2EDuration="2.071744265s" podCreationTimestamp="2025-11-22 09:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:30:20.065177235 +0000 UTC m=+8381.906600311" watchObservedRunningTime="2025-11-22 09:30:20.071744265 +0000 UTC m=+8381.913167271" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.235775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.236339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.236645 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.236708 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.240472 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.241794 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.267541 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.273148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.280052 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.437275 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.439803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.455659 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.542621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztvx\" (UniqueName: \"kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.542695 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.543003 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.543040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.543648 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.644763 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.644815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ztvx\" (UniqueName: \"kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.644865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " 
pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.644887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.644904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.645856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.645875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.645850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.646512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.667417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ztvx\" (UniqueName: \"kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx\") pod \"dnsmasq-dns-59478d75c9-xdf7j\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:21 crc kubenswrapper[4858]: I1122 09:30:21.765440 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:22 crc kubenswrapper[4858]: I1122 09:30:22.078715 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 09:30:22 crc kubenswrapper[4858]: I1122 09:30:22.253687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:30:22 crc kubenswrapper[4858]: W1122 09:30:22.260695 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81f0d7b5_53a2_4d57_8d3e_fce52b6fd098.slice/crio-92f4e7078287f330b2f9db7c62cecba4b0ea383cb20036c7200124c252b4c6d0 WatchSource:0}: Error finding container 92f4e7078287f330b2f9db7c62cecba4b0ea383cb20036c7200124c252b4c6d0: Status 404 returned error can't find the container with id 92f4e7078287f330b2f9db7c62cecba4b0ea383cb20036c7200124c252b4c6d0 Nov 22 09:30:23 crc kubenswrapper[4858]: I1122 09:30:23.069122 4858 generic.go:334] "Generic (PLEG): container finished" podID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerID="d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803" exitCode=0 Nov 22 09:30:23 crc kubenswrapper[4858]: I1122 09:30:23.069736 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" event={"ID":"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098","Type":"ContainerDied","Data":"d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803"} Nov 22 09:30:23 crc kubenswrapper[4858]: I1122 09:30:23.069822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" event={"ID":"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098","Type":"ContainerStarted","Data":"92f4e7078287f330b2f9db7c62cecba4b0ea383cb20036c7200124c252b4c6d0"} Nov 22 09:30:23 crc kubenswrapper[4858]: I1122 09:30:23.688283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 09:30:24 crc kubenswrapper[4858]: I1122 09:30:24.079785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" event={"ID":"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098","Type":"ContainerStarted","Data":"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7"} Nov 22 09:30:24 crc kubenswrapper[4858]: I1122 09:30:24.099122 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" podStartSLOduration=3.099101561 podStartE2EDuration="3.099101561s" podCreationTimestamp="2025-11-22 09:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:30:24.097094577 +0000 UTC m=+8385.938517593" watchObservedRunningTime="2025-11-22 09:30:24.099101561 +0000 UTC m=+8385.940524567" Nov 22 09:30:24 crc kubenswrapper[4858]: I1122 09:30:24.513916 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:24 crc kubenswrapper[4858]: I1122 09:30:24.514223 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-log" containerID="cri-o://ef464a3f7686d41a4099830d4c38a9d573796b3dd992d56d2f045b8d89ef85d0" gracePeriod=30 Nov 22 09:30:24 crc kubenswrapper[4858]: I1122 09:30:24.514381 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-api" containerID="cri-o://6e096e4a2ae7155d6d8c46d23f8467defc40e77a3fc4dbf0e562a5a006031139" gracePeriod=30 Nov 22 09:30:25 crc kubenswrapper[4858]: I1122 09:30:25.098559 4858 generic.go:334] "Generic (PLEG): container finished" podID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerID="ef464a3f7686d41a4099830d4c38a9d573796b3dd992d56d2f045b8d89ef85d0" exitCode=143 Nov 22 09:30:25 crc kubenswrapper[4858]: I1122 09:30:25.098644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerDied","Data":"ef464a3f7686d41a4099830d4c38a9d573796b3dd992d56d2f045b8d89ef85d0"} Nov 22 09:30:25 crc kubenswrapper[4858]: I1122 09:30:25.098794 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.127128 4858 generic.go:334] "Generic (PLEG): container finished" podID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerID="6e096e4a2ae7155d6d8c46d23f8467defc40e77a3fc4dbf0e562a5a006031139" exitCode=0 Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.127720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerDied","Data":"6e096e4a2ae7155d6d8c46d23f8467defc40e77a3fc4dbf0e562a5a006031139"} Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.127747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d82031e8-bb5f-4ee0-874b-277185e0421c","Type":"ContainerDied","Data":"6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50"} Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.127759 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7e27552151205e789943c5a3e4988d231bc486919a3360a0cb3a5c0ced8f50" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.142272 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.180405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9zq2\" (UniqueName: \"kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2\") pod \"d82031e8-bb5f-4ee0-874b-277185e0421c\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.180669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle\") pod \"d82031e8-bb5f-4ee0-874b-277185e0421c\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.180706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs\") pod \"d82031e8-bb5f-4ee0-874b-277185e0421c\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.180762 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data\") pod \"d82031e8-bb5f-4ee0-874b-277185e0421c\" (UID: \"d82031e8-bb5f-4ee0-874b-277185e0421c\") " Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.182615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs" (OuterVolumeSpecName: "logs") pod "d82031e8-bb5f-4ee0-874b-277185e0421c" (UID: "d82031e8-bb5f-4ee0-874b-277185e0421c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.200540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2" (OuterVolumeSpecName: "kube-api-access-v9zq2") pod "d82031e8-bb5f-4ee0-874b-277185e0421c" (UID: "d82031e8-bb5f-4ee0-874b-277185e0421c"). InnerVolumeSpecName "kube-api-access-v9zq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.210561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data" (OuterVolumeSpecName: "config-data") pod "d82031e8-bb5f-4ee0-874b-277185e0421c" (UID: "d82031e8-bb5f-4ee0-874b-277185e0421c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.214245 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d82031e8-bb5f-4ee0-874b-277185e0421c" (UID: "d82031e8-bb5f-4ee0-874b-277185e0421c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.294687 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.294917 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d82031e8-bb5f-4ee0-874b-277185e0421c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.294997 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82031e8-bb5f-4ee0-874b-277185e0421c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.295076 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9zq2\" (UniqueName: \"kubernetes.io/projected/d82031e8-bb5f-4ee0-874b-277185e0421c-kube-api-access-v9zq2\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.689347 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 09:30:28 crc kubenswrapper[4858]: I1122 09:30:28.715191 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.135260 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.178797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.202102 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.215617 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.224294 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:29 crc kubenswrapper[4858]: E1122 09:30:29.225221 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-api" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.225278 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-api" Nov 22 09:30:29 crc kubenswrapper[4858]: E1122 09:30:29.225294 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-log" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.225303 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-log" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.225655 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-api" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.225692 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" containerName="nova-api-log" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.227784 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.230921 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.231015 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.231411 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.235020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.313889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjff4\" (UniqueName: \"kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.314078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.314166 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.314269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.314333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.314393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.416056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjff4\" (UniqueName: \"kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.416626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.416887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.417116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.417278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.417522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.417939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.422609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.423178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.429947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.434698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.435010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjff4\" (UniqueName: \"kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4\") pod \"nova-api-0\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " pod="openstack/nova-api-0" Nov 
22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.550019 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82031e8-bb5f-4ee0-874b-277185e0421c" path="/var/lib/kubelet/pods/d82031e8-bb5f-4ee0-874b-277185e0421c/volumes" Nov 22 09:30:29 crc kubenswrapper[4858]: I1122 09:30:29.550720 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:30:30 crc kubenswrapper[4858]: W1122 09:30:30.082059 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3d42a8_0810_462c_abd3_73b770f8fb03.slice/crio-3d3f75511e59101b3bf5440d6488bcafc67c4b25b4a08c5566b7aae3df94b4ee WatchSource:0}: Error finding container 3d3f75511e59101b3bf5440d6488bcafc67c4b25b4a08c5566b7aae3df94b4ee: Status 404 returned error can't find the container with id 3d3f75511e59101b3bf5440d6488bcafc67c4b25b4a08c5566b7aae3df94b4ee Nov 22 09:30:30 crc kubenswrapper[4858]: I1122 09:30:30.089296 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:30:30 crc kubenswrapper[4858]: I1122 09:30:30.145590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerStarted","Data":"3d3f75511e59101b3bf5440d6488bcafc67c4b25b4a08c5566b7aae3df94b4ee"} Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.156008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerStarted","Data":"57907d16e0311bc717a33ae1f359ab9a46d08e1abe4ca40d8893d8086ef774ac"} Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.156255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerStarted","Data":"e8e0687f6df23a2cb8e5fca6694574c9fc79ab632a7cbd059eef1fbf16f9f711"} Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.191848 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.191819738 podStartE2EDuration="2.191819738s" podCreationTimestamp="2025-11-22 09:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:30:31.176762065 +0000 UTC m=+8393.018185091" watchObservedRunningTime="2025-11-22 09:30:31.191819738 +0000 UTC m=+8393.033242784" Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.769622 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.864832 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:30:31 crc kubenswrapper[4858]: I1122 09:30:31.865130 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="dnsmasq-dns" containerID="cri-o://f33c91ef77a22471d1aca401e99d6712203d5311feb3032dae0ac8df04c153b4" gracePeriod=10 Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.173226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" 
event={"ID":"ce7cb707-25be-452d-8471-63bac50960b0","Type":"ContainerDied","Data":"f33c91ef77a22471d1aca401e99d6712203d5311feb3032dae0ac8df04c153b4"} Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.173250 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce7cb707-25be-452d-8471-63bac50960b0" containerID="f33c91ef77a22471d1aca401e99d6712203d5311feb3032dae0ac8df04c153b4" exitCode=0 Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.360992 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.475410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb\") pod \"ce7cb707-25be-452d-8471-63bac50960b0\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.475912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc\") pod \"ce7cb707-25be-452d-8471-63bac50960b0\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.476159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzjbw\" (UniqueName: \"kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw\") pod \"ce7cb707-25be-452d-8471-63bac50960b0\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.477138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config\") pod \"ce7cb707-25be-452d-8471-63bac50960b0\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.477402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb\") pod \"ce7cb707-25be-452d-8471-63bac50960b0\" (UID: \"ce7cb707-25be-452d-8471-63bac50960b0\") " Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.480688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw" (OuterVolumeSpecName: "kube-api-access-tzjbw") pod "ce7cb707-25be-452d-8471-63bac50960b0" (UID: "ce7cb707-25be-452d-8471-63bac50960b0"). InnerVolumeSpecName "kube-api-access-tzjbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.536249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config" (OuterVolumeSpecName: "config") pod "ce7cb707-25be-452d-8471-63bac50960b0" (UID: "ce7cb707-25be-452d-8471-63bac50960b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.536265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce7cb707-25be-452d-8471-63bac50960b0" (UID: "ce7cb707-25be-452d-8471-63bac50960b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.544678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce7cb707-25be-452d-8471-63bac50960b0" (UID: "ce7cb707-25be-452d-8471-63bac50960b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.553360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce7cb707-25be-452d-8471-63bac50960b0" (UID: "ce7cb707-25be-452d-8471-63bac50960b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.580510 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.580557 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzjbw\" (UniqueName: \"kubernetes.io/projected/ce7cb707-25be-452d-8471-63bac50960b0-kube-api-access-tzjbw\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.580574 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.580589 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:32 crc kubenswrapper[4858]: I1122 09:30:32.580602 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7cb707-25be-452d-8471-63bac50960b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.184125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" event={"ID":"ce7cb707-25be-452d-8471-63bac50960b0","Type":"ContainerDied","Data":"f09a63462e508606c96c5873e2a0d23028ea012f53cd7b47212365820baa8fee"} Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.184197 4858 scope.go:117] "RemoveContainer" containerID="f33c91ef77a22471d1aca401e99d6712203d5311feb3032dae0ac8df04c153b4" Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.184409 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f6f7df9-dkzs6" Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.216678 4858 scope.go:117] "RemoveContainer" containerID="82703cf068f9f271c84721dcd01fbccd82382fb96b4e789b01bfc105b99e4b7b" Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.229523 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.238496 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75f6f7df9-dkzs6"] Nov 22 09:30:33 crc kubenswrapper[4858]: I1122 09:30:33.550257 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7cb707-25be-452d-8471-63bac50960b0" path="/var/lib/kubelet/pods/ce7cb707-25be-452d-8471-63bac50960b0/volumes" Nov 22 09:30:37 crc kubenswrapper[4858]: I1122 09:30:37.349584 4858 scope.go:117] "RemoveContainer" containerID="0b980d9c4b159ec4c43ebc38b297c40800977ddefcee229b71207360876dfad2" Nov 22 09:30:37 crc kubenswrapper[4858]: I1122 09:30:37.388341 4858 scope.go:117] "RemoveContainer" containerID="fd09b5dfdb5a00437659660bf9644fad68aad6b686f05e06d3259d27a3397a6e" Nov 22 09:30:37 crc kubenswrapper[4858]: I1122 09:30:37.427833 4858 scope.go:117] "RemoveContainer" containerID="fcff5d0ddcbefca0ce9e1379c01065d38ff4d96407cf269b7ffd530f8012bc38" Nov 22 09:30:39 crc kubenswrapper[4858]: I1122 09:30:39.563859 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:30:39 crc kubenswrapper[4858]: I1122 09:30:39.564310 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:30:40 crc kubenswrapper[4858]: I1122 09:30:40.563478 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.116:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:30:40 crc kubenswrapper[4858]: I1122 09:30:40.563482 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.116:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:30:45 crc kubenswrapper[4858]: I1122 09:30:45.312545 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:30:45 crc kubenswrapper[4858]: I1122 09:30:45.313108 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:30:49 crc kubenswrapper[4858]: I1122 09:30:49.561118 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:30:49 crc kubenswrapper[4858]: I1122 09:30:49.562055 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:30:49 crc kubenswrapper[4858]: I1122 09:30:49.565251 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:30:49 crc kubenswrapper[4858]: I1122 09:30:49.570511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 09:30:50 crc kubenswrapper[4858]: I1122 09:30:50.399677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:30:50 crc kubenswrapper[4858]: I1122 09:30:50.407943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.615880 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:02 crc kubenswrapper[4858]: E1122 09:31:02.616819 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="init" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.616832 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="init" Nov 22 09:31:02 crc kubenswrapper[4858]: E1122 09:31:02.616851 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="dnsmasq-dns" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.616859 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="dnsmasq-dns" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.617039 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7cb707-25be-452d-8471-63bac50960b0" containerName="dnsmasq-dns" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.618099 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.620353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.620897 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.621345 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7n4q2" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.621837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.648569 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.665757 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.666556 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-log" containerID="cri-o://341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7" gracePeriod=30 Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.666688 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-httpd" containerID="cri-o://d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a" gracePeriod=30 Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.684935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4r2h\" (UniqueName: \"kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.685375 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.685471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.685521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.685551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.714073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.715993 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.730115 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.787843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.787893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.787960 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4r2h\" (UniqueName: \"kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.788011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.788065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.789200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.791739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.794233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts\") pod 
\"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.799045 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.799358 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-log" containerID="cri-o://62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db" gracePeriod=30 Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.799902 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-httpd" containerID="cri-o://060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92" gracePeriod=30 Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.801378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.813034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4r2h\" (UniqueName: \"kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h\") pod \"horizon-7f9b557965-jjcqk\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.890956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.891016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.891784 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5n7\" (UniqueName: \"kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.891927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.891975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.956250 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993472 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993497 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5n7\" (UniqueName: \"kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.993879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.994295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.994771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:02 crc kubenswrapper[4858]: I1122 09:31:02.996781 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.010650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5n7\" (UniqueName: \"kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7\") pod \"horizon-688d58fb47-7r59d\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.033337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.397444 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.404715 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.536336 4858 generic.go:334] "Generic (PLEG): container finished" podID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerID="341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7" exitCode=143 Nov 22 09:31:03 crc kubenswrapper[4858]: W1122 09:31:03.536507 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0606c542_bce1_4395_9dd0_e969035176e8.slice/crio-1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a WatchSource:0}: Error finding container 1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a: Status 404 returned error can't find the container with id 1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.539052 4858 generic.go:334] "Generic (PLEG): container finished" podID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerID="62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db" exitCode=143 Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.547780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerDied","Data":"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7"} Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.547821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerDied","Data":"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db"} Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.547837 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:03 crc kubenswrapper[4858]: I1122 09:31:03.547853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerStarted","Data":"8b159a4d95d2094b9ad06a233f730b06e51e42481ba640df0a34d8812575719f"} Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.125721 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.148899 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.154991 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.165068 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.171374 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.223922 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.252912 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.254476 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.264410 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320363 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbfv8\" (UniqueName: \"kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.320974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " 
pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.321141 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jbv\" (UniqueName: \"kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbfv8\" (UniqueName: \"kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" 
Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.424945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425160 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.425380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.426077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.426803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.430994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.438258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.439809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.441053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbfv8\" (UniqueName: \"kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8\") pod \"horizon-7f54b85744-6qlnr\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.499379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jbv\" (UniqueName: \"kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527570 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.527634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.529451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.530029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.530656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.534869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.535338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.536131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.545858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jbv\" (UniqueName: \"kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv\") pod \"horizon-65ddb89f8-tmrff\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.561401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerStarted","Data":"1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a"} Nov 22 
09:31:04 crc kubenswrapper[4858]: I1122 09:31:04.576419 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:05 crc kubenswrapper[4858]: I1122 09:31:05.009949 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:31:05 crc kubenswrapper[4858]: I1122 09:31:05.109297 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:31:05 crc kubenswrapper[4858]: W1122 09:31:05.119452 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea7f39a_d3f5_4fc7_b08e_075d7806ba96.slice/crio-e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d WatchSource:0}: Error finding container e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d: Status 404 returned error can't find the container with id e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d Nov 22 09:31:05 crc kubenswrapper[4858]: I1122 09:31:05.569826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerStarted","Data":"8c6e7938479a6abe935b5c901546924ff4f3131347ef2985b346f144efa2db14"} Nov 22 09:31:05 crc kubenswrapper[4858]: I1122 09:31:05.571188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerStarted","Data":"e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d"} Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.367789 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468209 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468463 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: 
\"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fktwq\" (UniqueName: \"kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.468592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts\") pod \"b2e5858c-c5a3-4ada-910a-451cae38681d\" (UID: \"b2e5858c-c5a3-4ada-910a-451cae38681d\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.469016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.470994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs" (OuterVolumeSpecName: "logs") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.474338 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq" (OuterVolumeSpecName: "kube-api-access-fktwq") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "kube-api-access-fktwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.492526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts" (OuterVolumeSpecName: "scripts") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.499338 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.536620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.540774 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.541801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data" (OuterVolumeSpecName: "config-data") pod "b2e5858c-c5a3-4ada-910a-451cae38681d" (UID: "b2e5858c-c5a3-4ada-910a-451cae38681d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573492 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fktwq\" (UniqueName: \"kubernetes.io/projected/b2e5858c-c5a3-4ada-910a-451cae38681d-kube-api-access-fktwq\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573524 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573534 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573543 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573552 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e5858c-c5a3-4ada-910a-451cae38681d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573561 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.573570 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5858c-c5a3-4ada-910a-451cae38681d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.593791 4858 generic.go:334] "Generic (PLEG): container finished" podID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerID="060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92" exitCode=0 Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.593908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerDied","Data":"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92"} Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.593964 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.594166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4d7f370-0917-41d8-99eb-9995b65aa253","Type":"ContainerDied","Data":"26bb6b3d70431d043f9b2f2b2ee59d1bf59c9a5fb9fb787766bbbc2d08ec5fd4"} Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.594292 4858 scope.go:117] "RemoveContainer" containerID="060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.604865 4858 generic.go:334] "Generic (PLEG): container finished" podID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerID="d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a" exitCode=0 Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.604905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerDied","Data":"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a"} Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.604930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2e5858c-c5a3-4ada-910a-451cae38681d","Type":"ContainerDied","Data":"b1ac3a35ddbed4b08026764e2cdebc03589d43682d34e69bc24b29fc715b5fbd"} Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.605010 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.648423 4858 scope.go:117] "RemoveContainer" containerID="62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.660896 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.673451 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674194 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674224 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674301 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc 
kubenswrapper[4858]: I1122 09:31:06.674335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.674484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvts\" (UniqueName: \"kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts\") pod \"e4d7f370-0917-41d8-99eb-9995b65aa253\" (UID: \"e4d7f370-0917-41d8-99eb-9995b65aa253\") " Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.680071 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs" (OuterVolumeSpecName: "logs") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.680422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts" (OuterVolumeSpecName: "kube-api-access-fjvts") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "kube-api-access-fjvts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.680724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.685574 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts" (OuterVolumeSpecName: "scripts") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.686428 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.686911 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.686924 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.686935 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.686941 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.686971 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.686978 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.686995 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.687001 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.687176 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.687195 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-httpd" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.687204 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.687217 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" containerName="glance-log" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.688387 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.690202 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.690537 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.694263 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.722393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.729701 4858 scope.go:117] "RemoveContainer" containerID="060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.730226 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92\": container with ID starting with 060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92 not found: ID does not exist" containerID="060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.730264 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92"} err="failed to get container status \"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92\": rpc error: code = NotFound desc = could not find container \"060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92\": container with ID starting with 060d3755f6fa78176a173aa5d0c5c77343037e1e79691285f4c73ee112cb1f92 not found: ID does not exist" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.730290 4858 scope.go:117] "RemoveContainer" containerID="62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.730677 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db\": container with ID starting with 62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db not found: ID does not exist" containerID="62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.730732 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db"} err="failed to get container status \"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db\": rpc error: code = NotFound desc = could not find container \"62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db\": container with ID starting with 62928c4af2ab281f49c7b5452c28eb6c9417e8e8ddff1ff4f5397a16555936db not found: ID does not exist" Nov 22 09:31:06 crc kubenswrapper[4858]: 
I1122 09:31:06.730763 4858 scope.go:117] "RemoveContainer" containerID="d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.737484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data" (OuterVolumeSpecName: "config-data") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.757864 4858 scope.go:117] "RemoveContainer" containerID="341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.771444 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e4d7f370-0917-41d8-99eb-9995b65aa253" (UID: "e4d7f370-0917-41d8-99eb-9995b65aa253"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777759 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777810 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d7f370-0917-41d8-99eb-9995b65aa253-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777825 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777840 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjvts\" (UniqueName: \"kubernetes.io/projected/e4d7f370-0917-41d8-99eb-9995b65aa253-kube-api-access-fjvts\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777853 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777864 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.777877 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d7f370-0917-41d8-99eb-9995b65aa253-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.785799 4858 scope.go:117] "RemoveContainer" containerID="d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.786163 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a\": container with ID starting with 
d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a not found: ID does not exist" containerID="d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.786195 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a"} err="failed to get container status \"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a\": rpc error: code = NotFound desc = could not find container \"d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a\": container with ID starting with d5e783c3cb6a9ed8e649679e6fba68431e0ecbbe18a86bee4bd7f7295dc6612a not found: ID does not exist" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.786215 4858 scope.go:117] "RemoveContainer" containerID="341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7" Nov 22 09:31:06 crc kubenswrapper[4858]: E1122 09:31:06.786632 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7\": container with ID starting with 341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7 not found: ID does not exist" containerID="341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.786654 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7"} err="failed to get container status \"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7\": rpc error: code = NotFound desc = could not find container \"341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7\": container with ID starting with 341876230c373b7f506acb39c0fd2fe8463080d2f198c87b2ea330da08efafd7 not found: ID does not exist" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.880664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.880755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.880827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.881007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.881049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.881092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6s9\" (UniqueName: \"kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.881422 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.980991 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.983692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.983838 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.983871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.983948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.984010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.984028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.984348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6s9\" (UniqueName: \"kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.984806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.987992 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.991418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.993371 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:06 crc kubenswrapper[4858]: I1122 09:31:06.995760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.001502 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.010138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6s9\" (UniqueName: \"kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.010606 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " pod="openstack/glance-default-external-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.013638 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.018999 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.021683 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.026513 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.032823 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.033414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.188578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.189890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.189982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.190384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.190446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w962\" (UniqueName: \"kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.190523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.190665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296588 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w962\" (UniqueName: \"kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296622 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.296741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.297201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.302180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.303247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.304682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.304964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.306863 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.316670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w962\" (UniqueName: \"kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962\") pod \"glance-default-internal-api-0\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.404525 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.556203 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e5858c-c5a3-4ada-910a-451cae38681d" path="/var/lib/kubelet/pods/b2e5858c-c5a3-4ada-910a-451cae38681d/volumes" Nov 22 09:31:07 crc kubenswrapper[4858]: I1122 09:31:07.557035 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d7f370-0917-41d8-99eb-9995b65aa253" path="/var/lib/kubelet/pods/e4d7f370-0917-41d8-99eb-9995b65aa253/volumes" Nov 22 09:31:12 crc kubenswrapper[4858]: I1122 09:31:12.998519 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:31:13 crc kubenswrapper[4858]: W1122 09:31:13.008831 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3945c5e_f85e_4fa1_b48f_1ec3bbc20d70.slice/crio-78b2b153ac1b3d9aed94d5ae9675a45b8e47b2a4cc4aa949dfee02e41d3b4cd6 WatchSource:0}: Error finding container 78b2b153ac1b3d9aed94d5ae9675a45b8e47b2a4cc4aa949dfee02e41d3b4cd6: Status 404 returned error can't find the container with id 78b2b153ac1b3d9aed94d5ae9675a45b8e47b2a4cc4aa949dfee02e41d3b4cd6 Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.585711 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.941091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerStarted","Data":"5c74ad98a26b5abf2505019b8830b75720846382cae9001a96ee50c0ef31e4c6"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.943023 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerStarted","Data":"b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.941563 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9b557965-jjcqk" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon" containerID="cri-o://b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1" gracePeriod=30 Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.941373 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f9b557965-jjcqk" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon-log" containerID="cri-o://5c74ad98a26b5abf2505019b8830b75720846382cae9001a96ee50c0ef31e4c6" gracePeriod=30 Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.950401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerStarted","Data":"26fccc2831fe719dcae1f491e8756664bbc10b9296a7e26511127930fb850eb3"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.950456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerStarted","Data":"a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.950592 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-688d58fb47-7r59d" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon-log" containerID="cri-o://26fccc2831fe719dcae1f491e8756664bbc10b9296a7e26511127930fb850eb3" gracePeriod=30 Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.950700 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-688d58fb47-7r59d" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon" containerID="cri-o://a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e" gracePeriod=30 Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.964287 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f9b557965-jjcqk" podStartSLOduration=3.180481424 podStartE2EDuration="11.964260749s" podCreationTimestamp="2025-11-22 09:31:02 +0000 UTC" firstStartedPulling="2025-11-22 09:31:03.404487128 +0000 UTC m=+8425.245910134" lastFinishedPulling="2025-11-22 09:31:12.188266443 +0000 UTC m=+8434.029689459" observedRunningTime="2025-11-22 09:31:13.960864481 +0000 UTC m=+8435.802287487" watchObservedRunningTime="2025-11-22 09:31:13.964260749 +0000 UTC m=+8435.805683765" Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.977399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerStarted","Data":"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.977725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerStarted","Data":"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b"} Nov 22 09:31:13 crc kubenswrapper[4858]: I1122 09:31:13.984362 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerStarted","Data":"f6b7add04a797235d39d3adaefa419ff364a9af11f9a24949843784e159f7c9d"} Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.003668 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-688d58fb47-7r59d" podStartSLOduration=3.361584911 podStartE2EDuration="12.003647719s" podCreationTimestamp="2025-11-22 09:31:02 +0000 UTC" firstStartedPulling="2025-11-22 09:31:03.537655581 +0000 UTC m=+8425.379078587" lastFinishedPulling="2025-11-22 09:31:12.179718369 +0000 UTC m=+8434.021141395" observedRunningTime="2025-11-22 09:31:13.985670324 +0000 UTC m=+8435.827093330" watchObservedRunningTime="2025-11-22 09:31:14.003647719 +0000 UTC m=+8435.845070725" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.015809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerStarted","Data":"ff441736a1f0bdc42df1f5f8ac8566ce878fc447391e44d3d92513cd53973a0c"} Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.016254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerStarted","Data":"78b2b153ac1b3d9aed94d5ae9675a45b8e47b2a4cc4aa949dfee02e41d3b4cd6"} Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.021903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" 
event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerStarted","Data":"2a918e4fcbaf137796a0b1904f7b3db87767ea1e2d0f817872ea17bfa2bef504"} Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.021931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerStarted","Data":"72ca74897600f02e691c62c445465e8f3abb5ca42e51913fe100ca110fa41167"} Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.045367 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f54b85744-6qlnr" podStartSLOduration=2.879140507 podStartE2EDuration="10.045347835s" podCreationTimestamp="2025-11-22 09:31:04 +0000 UTC" firstStartedPulling="2025-11-22 09:31:05.053488041 +0000 UTC m=+8426.894911047" lastFinishedPulling="2025-11-22 09:31:12.219695359 +0000 UTC m=+8434.061118375" observedRunningTime="2025-11-22 09:31:14.019203088 +0000 UTC m=+8435.860626104" watchObservedRunningTime="2025-11-22 09:31:14.045347835 +0000 UTC m=+8435.886770851" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.046247 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65ddb89f8-tmrff" podStartSLOduration=2.919461087 podStartE2EDuration="10.046242493s" podCreationTimestamp="2025-11-22 09:31:04 +0000 UTC" firstStartedPulling="2025-11-22 09:31:05.121085124 +0000 UTC m=+8426.962508130" lastFinishedPulling="2025-11-22 09:31:12.24786653 +0000 UTC m=+8434.089289536" observedRunningTime="2025-11-22 09:31:14.044780566 +0000 UTC m=+8435.886203572" watchObservedRunningTime="2025-11-22 09:31:14.046242493 +0000 UTC m=+8435.887665489" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.500013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.500522 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.577791 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:14 crc kubenswrapper[4858]: I1122 09:31:14.577827 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.033706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerStarted","Data":"428bc38b18119c4305d118eb828b9d35bf76f7f0732bf893cb1b34f626cfecdb"} Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.034033 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerStarted","Data":"0aa85a37e97e72c8efa2b73911f4eef75c838f7fb6915cad5a5299b8caecf2b7"} Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.036714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerStarted","Data":"2616f6e010c2f47567c82c59233a83474d6307221bc0e3019310b01ca819c5e0"} Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.085856 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" 
podStartSLOduration=9.085827632 podStartE2EDuration="9.085827632s" podCreationTimestamp="2025-11-22 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:31:15.082975831 +0000 UTC m=+8436.924398857" watchObservedRunningTime="2025-11-22 09:31:15.085827632 +0000 UTC m=+8436.927250658" Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.089746 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.089718996 podStartE2EDuration="9.089718996s" podCreationTimestamp="2025-11-22 09:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:31:15.066435182 +0000 UTC m=+8436.907858198" watchObservedRunningTime="2025-11-22 09:31:15.089718996 +0000 UTC m=+8436.931142022" Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.321853 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:31:15 crc kubenswrapper[4858]: I1122 09:31:15.321908 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.014514 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.015843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.054776 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.060046 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.061731 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.406212 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.406600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.457033 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:17 crc kubenswrapper[4858]: I1122 09:31:17.476926 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:18 crc kubenswrapper[4858]: I1122 09:31:18.074108 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 09:31:18 crc 
kubenswrapper[4858]: I1122 09:31:18.076292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:18 crc kubenswrapper[4858]: I1122 09:31:18.076624 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:20 crc kubenswrapper[4858]: I1122 09:31:20.071492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:20 crc kubenswrapper[4858]: I1122 09:31:20.081157 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 09:31:21 crc kubenswrapper[4858]: I1122 09:31:21.187870 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 09:31:21 crc kubenswrapper[4858]: I1122 09:31:21.190412 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 09:31:22 crc kubenswrapper[4858]: I1122 09:31:22.957777 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:23 crc kubenswrapper[4858]: I1122 09:31:23.034332 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:24 crc kubenswrapper[4858]: I1122 09:31:24.501717 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.119:8443: connect: connection refused" Nov 22 09:31:24 crc kubenswrapper[4858]: I1122 09:31:24.580153 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.120:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8443: connect: connection refused" Nov 22 09:31:36 crc kubenswrapper[4858]: I1122 09:31:36.313686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:36 crc kubenswrapper[4858]: I1122 09:31:36.423735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:31:37 crc kubenswrapper[4858]: I1122 09:31:37.917983 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:31:37 crc kubenswrapper[4858]: I1122 09:31:37.976943 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:31:37 crc kubenswrapper[4858]: I1122 09:31:37.980504 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon-log" containerID="cri-o://a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2" gracePeriod=30 Nov 22 09:31:37 crc kubenswrapper[4858]: I1122 09:31:37.980535 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" 
containerID="cri-o://8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b" gracePeriod=30 Nov 22 09:31:37 crc kubenswrapper[4858]: I1122 09:31:37.993313 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Nov 22 09:31:41 crc kubenswrapper[4858]: I1122 09:31:41.385495 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:60298->10.217.1.119:8443: read: connection reset by peer" Nov 22 09:31:42 crc kubenswrapper[4858]: I1122 09:31:42.391912 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerID="8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b" exitCode=0 Nov 22 09:31:42 crc kubenswrapper[4858]: I1122 09:31:42.391995 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerDied","Data":"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b"} Nov 22 09:31:44 crc kubenswrapper[4858]: E1122 09:31:44.299228 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0606c542_bce1_4395_9dd0_e969035176e8.slice/crio-conmon-a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf506ad00_03d0_4ac6_b172_2ff0f667abe5.slice/crio-conmon-b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.464823 4858 generic.go:334] "Generic (PLEG): container finished" podID="0606c542-bce1-4395-9dd0-e969035176e8" containerID="a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e" exitCode=137 Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.464856 4858 generic.go:334] "Generic (PLEG): container finished" podID="0606c542-bce1-4395-9dd0-e969035176e8" containerID="26fccc2831fe719dcae1f491e8756664bbc10b9296a7e26511127930fb850eb3" exitCode=137 Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.464917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerDied","Data":"a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e"} Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.464944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerDied","Data":"26fccc2831fe719dcae1f491e8756664bbc10b9296a7e26511127930fb850eb3"} Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.464954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-688d58fb47-7r59d" event={"ID":"0606c542-bce1-4395-9dd0-e969035176e8","Type":"ContainerDied","Data":"1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a"} Nov 22 09:31:44 crc 
kubenswrapper[4858]: I1122 09:31:44.464963 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a67d7b5f1b2cf65c90cf465a6bceea8db0279edd91f5861441f676059ad217a" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466584 4858 generic.go:334] "Generic (PLEG): container finished" podID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerID="b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1" exitCode=137 Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466599 4858 generic.go:334] "Generic (PLEG): container finished" podID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerID="5c74ad98a26b5abf2505019b8830b75720846382cae9001a96ee50c0ef31e4c6" exitCode=137 Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerDied","Data":"b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1"} Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerDied","Data":"5c74ad98a26b5abf2505019b8830b75720846382cae9001a96ee50c0ef31e4c6"} Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9b557965-jjcqk" event={"ID":"f506ad00-03d0-4ac6-b172-2ff0f667abe5","Type":"ContainerDied","Data":"8b159a4d95d2094b9ad06a233f730b06e51e42481ba640df0a34d8812575719f"} Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.466646 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b159a4d95d2094b9ad06a233f730b06e51e42481ba640df0a34d8812575719f" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.481864 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.496282 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.500221 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.119:8443: connect: connection refused" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.627752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5n7\" (UniqueName: \"kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7\") pod \"0606c542-bce1-4395-9dd0-e969035176e8\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628103 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data\") pod \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628169 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key\") pod \"0606c542-bce1-4395-9dd0-e969035176e8\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs\") pod \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data\") pod \"0606c542-bce1-4395-9dd0-e969035176e8\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key\") pod \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts\") pod \"0606c542-bce1-4395-9dd0-e969035176e8\" (UID: \"0606c542-bce1-4395-9dd0-e969035176e8\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts\") pod \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628496 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs\") pod \"0606c542-bce1-4395-9dd0-e969035176e8\" (UID: 
\"0606c542-bce1-4395-9dd0-e969035176e8\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.628543 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4r2h\" (UniqueName: \"kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h\") pod \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\" (UID: \"f506ad00-03d0-4ac6-b172-2ff0f667abe5\") " Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.631693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs" (OuterVolumeSpecName: "logs") pod "0606c542-bce1-4395-9dd0-e969035176e8" (UID: "0606c542-bce1-4395-9dd0-e969035176e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.633143 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs" (OuterVolumeSpecName: "logs") pod "f506ad00-03d0-4ac6-b172-2ff0f667abe5" (UID: "f506ad00-03d0-4ac6-b172-2ff0f667abe5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.637293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h" (OuterVolumeSpecName: "kube-api-access-j4r2h") pod "f506ad00-03d0-4ac6-b172-2ff0f667abe5" (UID: "f506ad00-03d0-4ac6-b172-2ff0f667abe5"). InnerVolumeSpecName "kube-api-access-j4r2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.639041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7" (OuterVolumeSpecName: "kube-api-access-tv5n7") pod "0606c542-bce1-4395-9dd0-e969035176e8" (UID: "0606c542-bce1-4395-9dd0-e969035176e8"). InnerVolumeSpecName "kube-api-access-tv5n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.648301 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f506ad00-03d0-4ac6-b172-2ff0f667abe5" (UID: "f506ad00-03d0-4ac6-b172-2ff0f667abe5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.649131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0606c542-bce1-4395-9dd0-e969035176e8" (UID: "0606c542-bce1-4395-9dd0-e969035176e8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.655738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts" (OuterVolumeSpecName: "scripts") pod "f506ad00-03d0-4ac6-b172-2ff0f667abe5" (UID: "f506ad00-03d0-4ac6-b172-2ff0f667abe5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.658730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data" (OuterVolumeSpecName: "config-data") pod "f506ad00-03d0-4ac6-b172-2ff0f667abe5" (UID: "f506ad00-03d0-4ac6-b172-2ff0f667abe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.663138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data" (OuterVolumeSpecName: "config-data") pod "0606c542-bce1-4395-9dd0-e969035176e8" (UID: "0606c542-bce1-4395-9dd0-e969035176e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.664013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts" (OuterVolumeSpecName: "scripts") pod "0606c542-bce1-4395-9dd0-e969035176e8" (UID: "0606c542-bce1-4395-9dd0-e969035176e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732469 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4r2h\" (UniqueName: \"kubernetes.io/projected/f506ad00-03d0-4ac6-b172-2ff0f667abe5-kube-api-access-j4r2h\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732508 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv5n7\" (UniqueName: \"kubernetes.io/projected/0606c542-bce1-4395-9dd0-e969035176e8-kube-api-access-tv5n7\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732519 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732529 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0606c542-bce1-4395-9dd0-e969035176e8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732539 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f506ad00-03d0-4ac6-b172-2ff0f667abe5-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732551 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732562 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f506ad00-03d0-4ac6-b172-2ff0f667abe5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732572 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0606c542-bce1-4395-9dd0-e969035176e8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732581 4858 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f506ad00-03d0-4ac6-b172-2ff0f667abe5-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:44 crc kubenswrapper[4858]: I1122 09:31:44.732590 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0606c542-bce1-4395-9dd0-e969035176e8-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.312661 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.312723 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.312789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.313837 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.313922 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099" gracePeriod=600 Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.479912 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099" exitCode=0 Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.480008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9b557965-jjcqk" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.480459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099"} Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.480529 4858 scope.go:117] "RemoveContainer" containerID="8cd15dc3ca06cfced21143529be1b6c04ba7f310cb952349cda5986cf7b8a417" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.480555 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-688d58fb47-7r59d" Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.546982 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.549860 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-688d58fb47-7r59d"] Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.556711 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:45 crc kubenswrapper[4858]: I1122 09:31:45.567207 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f9b557965-jjcqk"] Nov 22 09:31:46 crc kubenswrapper[4858]: I1122 09:31:46.492111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82"} Nov 22 09:31:47 crc kubenswrapper[4858]: I1122 09:31:47.548951 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0606c542-bce1-4395-9dd0-e969035176e8" path="/var/lib/kubelet/pods/0606c542-bce1-4395-9dd0-e969035176e8/volumes" Nov 22 09:31:47 crc kubenswrapper[4858]: I1122 09:31:47.550922 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" path="/var/lib/kubelet/pods/f506ad00-03d0-4ac6-b172-2ff0f667abe5/volumes" Nov 22 09:31:54 crc kubenswrapper[4858]: I1122 09:31:54.500309 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.119:8443: connect: connection refused" Nov 22 09:32:04 crc kubenswrapper[4858]: I1122 09:32:04.500233 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f54b85744-6qlnr" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.119:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.119:8443: connect: connection refused" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.356232 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437380 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437682 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbfv8\" (UniqueName: \"kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.437911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key\") pod \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\" (UID: \"3d6b5396-d50b-4f98-a9cd-5a2595cd610c\") " Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.438895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs" (OuterVolumeSpecName: "logs") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.443967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8" (OuterVolumeSpecName: "kube-api-access-xbfv8") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "kube-api-access-xbfv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.444483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.465956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.472135 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts" (OuterVolumeSpecName: "scripts") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.481995 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data" (OuterVolumeSpecName: "config-data") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.492095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3d6b5396-d50b-4f98-a9cd-5a2595cd610c" (UID: "3d6b5396-d50b-4f98-a9cd-5a2595cd610c"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540230 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540282 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540303 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540352 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540371 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540391 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbfv8\" (UniqueName: \"kubernetes.io/projected/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-kube-api-access-xbfv8\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.540410 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3d6b5396-d50b-4f98-a9cd-5a2595cd610c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.759641 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerID="a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2" exitCode=137 Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.759689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerDied","Data":"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2"} Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.759717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f54b85744-6qlnr" event={"ID":"3d6b5396-d50b-4f98-a9cd-5a2595cd610c","Type":"ContainerDied","Data":"8c6e7938479a6abe935b5c901546924ff4f3131347ef2985b346f144efa2db14"} Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.759735 4858 scope.go:117] "RemoveContainer" containerID="8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.759733 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f54b85744-6qlnr" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.804368 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.829352 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f54b85744-6qlnr"] Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.975141 4858 scope.go:117] "RemoveContainer" containerID="a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.994494 4858 scope.go:117] "RemoveContainer" containerID="8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b" Nov 22 09:32:08 crc kubenswrapper[4858]: E1122 09:32:08.994905 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b\": container with ID starting with 8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b not found: ID does not exist" containerID="8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.994936 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b"} err="failed to get container status \"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b\": rpc error: code = NotFound desc = could not find container \"8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b\": container with ID starting with 8d3637f5a448647b4e9c7d2b21698c74dfcba86b0b1ba5c91475801cdc0dc29b not found: ID does not exist" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.994960 4858 scope.go:117] "RemoveContainer" containerID="a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2" Nov 22 09:32:08 crc kubenswrapper[4858]: E1122 09:32:08.995533 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2\": container with ID starting with a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2 not found: ID does not exist" containerID="a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2" Nov 22 09:32:08 crc kubenswrapper[4858]: I1122 09:32:08.995566 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2"} err="failed to get container status \"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2\": rpc error: code = NotFound desc = could not find container \"a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2\": container with ID starting with a888bb10a29fab83b65a332e5f4535976be60698e115aa7602ac75a1df9685b2 not found: ID does not exist" Nov 22 09:32:09 crc kubenswrapper[4858]: I1122 09:32:09.548429 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" path="/var/lib/kubelet/pods/3d6b5396-d50b-4f98-a9cd-5a2595cd610c/volumes" Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.060382 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jk4xd"] Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.071437 4858 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-create-jk4xd"] Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.084297 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-04c4-account-create-kn5lf"] Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.094797 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-04c4-account-create-kn5lf"] Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.553963 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1031cca4-1d3b-4294-98fa-88f044db7bcb" path="/var/lib/kubelet/pods/1031cca4-1d3b-4294-98fa-88f044db7bcb/volumes" Nov 22 09:32:19 crc kubenswrapper[4858]: I1122 09:32:19.554845 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e6d022-50db-4869-ae72-e3a3b392654c" path="/var/lib/kubelet/pods/76e6d022-50db-4869-ae72-e3a3b392654c/volumes" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.321659 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322347 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322379 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322387 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322395 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322401 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322417 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322422 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322435 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322441 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: E1122 09:32:20.322457 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322462 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322624 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" 
containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322639 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322652 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322662 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f506ad00-03d0-4ac6-b172-2ff0f667abe5" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322673 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0606c542-bce1-4395-9dd0-e969035176e8" containerName="horizon-log" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.322689 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6b5396-d50b-4f98-a9cd-5a2595cd610c" containerName="horizon" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.323606 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.342579 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.429887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.429992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.430106 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.430299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zqd\" (UniqueName: \"kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.430507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.430592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.430655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78zqd\" (UniqueName: \"kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.533945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.534050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.534442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: 
\"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.541173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.544542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.544658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.544745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.552134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78zqd\" (UniqueName: \"kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.556439 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts\") pod \"horizon-69c86c95b8-8h6xv\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:20 crc kubenswrapper[4858]: I1122 09:32:20.693355 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.161338 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.682299 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-mzr28"] Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.684234 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.692339 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mzr28"] Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.783271 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-aca4-account-create-kb47r"] Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.785234 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.787834 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.793204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-aca4-account-create-kb47r"] Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.869718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.870028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xprx\" (UniqueName: \"kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.902803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerStarted","Data":"8ed3ae5cedd53bd3a69e7e010ea65e7a6fc66b139c069cae1957b6aaf00b873d"} Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.902852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerStarted","Data":"6a0b3388f3b07e344f0aa419e784922d216578af23d8b90ea1471324a5e1ccfa"} Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.902863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerStarted","Data":"f6a8cdf219586b4cf7e0346640250bb59e7afdf0fe9ec129978130c3bee06d73"} Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.971628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlsj\" (UniqueName: \"kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.971865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.972701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.972773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.972964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xprx\" (UniqueName: \"kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:21 crc kubenswrapper[4858]: I1122 09:32:21.998981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xprx\" (UniqueName: \"kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx\") pod \"heat-db-create-mzr28\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " pod="openstack/heat-db-create-mzr28" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.009827 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mzr28" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.076312 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.076457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlsj\" (UniqueName: \"kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.077827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.107302 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlsj\" (UniqueName: \"kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj\") pod \"heat-aca4-account-create-kb47r\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.404813 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.548843 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69c86c95b8-8h6xv" podStartSLOduration=2.548804884 podStartE2EDuration="2.548804884s" podCreationTimestamp="2025-11-22 09:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:32:21.934520496 +0000 UTC m=+8503.775943542" watchObservedRunningTime="2025-11-22 09:32:22.548804884 +0000 UTC m=+8504.390227910" Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.553567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mzr28"] Nov 22 09:32:22 crc kubenswrapper[4858]: W1122 09:32:22.559828 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726135ea_5ba7_49da_ac47_303d08f1ac58.slice/crio-1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57 WatchSource:0}: Error finding container 1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57: Status 404 returned error can't find the container with id 1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57 Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.883201 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-aca4-account-create-kb47r"] Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.916444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mzr28" event={"ID":"726135ea-5ba7-49da-ac47-303d08f1ac58","Type":"ContainerStarted","Data":"60a7697719dcfe5cae1572c1e36b77399083669158f5e42f6d05ab4268425eff"} Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.916494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mzr28" event={"ID":"726135ea-5ba7-49da-ac47-303d08f1ac58","Type":"ContainerStarted","Data":"1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57"} Nov 22 09:32:22 crc kubenswrapper[4858]: I1122 09:32:22.937846 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-mzr28" podStartSLOduration=1.937821924 podStartE2EDuration="1.937821924s" podCreationTimestamp="2025-11-22 09:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:32:22.933678672 +0000 UTC m=+8504.775101688" watchObservedRunningTime="2025-11-22 09:32:22.937821924 +0000 UTC m=+8504.779244950" Nov 22 09:32:23 crc kubenswrapper[4858]: W1122 09:32:23.370081 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04ad0b71_b272_4af6_a216_5fd4432bb7d7.slice/crio-89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6 WatchSource:0}: Error finding container 89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6: Status 404 returned error can't find the container with id 89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6 Nov 22 09:32:23 crc kubenswrapper[4858]: I1122 09:32:23.926411 4858 generic.go:334] "Generic (PLEG): container finished" podID="726135ea-5ba7-49da-ac47-303d08f1ac58" containerID="60a7697719dcfe5cae1572c1e36b77399083669158f5e42f6d05ab4268425eff" exitCode=0 Nov 22 09:32:23 crc kubenswrapper[4858]: I1122 09:32:23.926612 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mzr28" event={"ID":"726135ea-5ba7-49da-ac47-303d08f1ac58","Type":"ContainerDied","Data":"60a7697719dcfe5cae1572c1e36b77399083669158f5e42f6d05ab4268425eff"} Nov 22 09:32:23 crc kubenswrapper[4858]: I1122 09:32:23.929259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-aca4-account-create-kb47r" event={"ID":"04ad0b71-b272-4af6-a216-5fd4432bb7d7","Type":"ContainerStarted","Data":"cd325d1c2af1c603c1fe84df51a0ecd6724e440095165ec34cc4c4d521a1494f"} Nov 22 09:32:23 crc kubenswrapper[4858]: I1122 09:32:23.929287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-aca4-account-create-kb47r" event={"ID":"04ad0b71-b272-4af6-a216-5fd4432bb7d7","Type":"ContainerStarted","Data":"89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6"} Nov 22 09:32:23 crc kubenswrapper[4858]: I1122 09:32:23.970967 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-aca4-account-create-kb47r" podStartSLOduration=2.970939226 podStartE2EDuration="2.970939226s" podCreationTimestamp="2025-11-22 09:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:32:23.966186634 +0000 UTC m=+8505.807609640" watchObservedRunningTime="2025-11-22 09:32:23.970939226 +0000 UTC m=+8505.812362232" Nov 22 09:32:24 crc kubenswrapper[4858]: I1122 09:32:24.940126 4858 generic.go:334] "Generic (PLEG): container finished" podID="04ad0b71-b272-4af6-a216-5fd4432bb7d7" containerID="cd325d1c2af1c603c1fe84df51a0ecd6724e440095165ec34cc4c4d521a1494f" exitCode=0 Nov 22 09:32:24 crc kubenswrapper[4858]: I1122 09:32:24.940228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-aca4-account-create-kb47r" event={"ID":"04ad0b71-b272-4af6-a216-5fd4432bb7d7","Type":"ContainerDied","Data":"cd325d1c2af1c603c1fe84df51a0ecd6724e440095165ec34cc4c4d521a1494f"} Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.560596 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mzr28" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.749723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xprx\" (UniqueName: \"kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx\") pod \"726135ea-5ba7-49da-ac47-303d08f1ac58\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.749827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts\") pod \"726135ea-5ba7-49da-ac47-303d08f1ac58\" (UID: \"726135ea-5ba7-49da-ac47-303d08f1ac58\") " Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.750271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "726135ea-5ba7-49da-ac47-303d08f1ac58" (UID: "726135ea-5ba7-49da-ac47-303d08f1ac58"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.750814 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/726135ea-5ba7-49da-ac47-303d08f1ac58-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.757227 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx" (OuterVolumeSpecName: "kube-api-access-8xprx") pod "726135ea-5ba7-49da-ac47-303d08f1ac58" (UID: "726135ea-5ba7-49da-ac47-303d08f1ac58"). InnerVolumeSpecName "kube-api-access-8xprx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.853941 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xprx\" (UniqueName: \"kubernetes.io/projected/726135ea-5ba7-49da-ac47-303d08f1ac58-kube-api-access-8xprx\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.956064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mzr28" event={"ID":"726135ea-5ba7-49da-ac47-303d08f1ac58","Type":"ContainerDied","Data":"1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57"} Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.956135 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1280087350bee601c9405f0ab6194380ad93be9d021ac592540748daee1ede57" Nov 22 09:32:25 crc kubenswrapper[4858]: I1122 09:32:25.956157 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mzr28" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.374574 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.572528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts\") pod \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.573100 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqlsj\" (UniqueName: \"kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj\") pod \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\" (UID: \"04ad0b71-b272-4af6-a216-5fd4432bb7d7\") " Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.573621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04ad0b71-b272-4af6-a216-5fd4432bb7d7" (UID: "04ad0b71-b272-4af6-a216-5fd4432bb7d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.578068 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj" (OuterVolumeSpecName: "kube-api-access-mqlsj") pod "04ad0b71-b272-4af6-a216-5fd4432bb7d7" (UID: "04ad0b71-b272-4af6-a216-5fd4432bb7d7"). InnerVolumeSpecName "kube-api-access-mqlsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.675656 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqlsj\" (UniqueName: \"kubernetes.io/projected/04ad0b71-b272-4af6-a216-5fd4432bb7d7-kube-api-access-mqlsj\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.675695 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ad0b71-b272-4af6-a216-5fd4432bb7d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.975089 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-aca4-account-create-kb47r" event={"ID":"04ad0b71-b272-4af6-a216-5fd4432bb7d7","Type":"ContainerDied","Data":"89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6"} Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.975146 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ae5e1d21609035b061619837ab01edc13b33a45c3dfbbf154bb74de87a0ad6" Nov 22 09:32:26 crc kubenswrapper[4858]: I1122 09:32:26.975220 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-aca4-account-create-kb47r" Nov 22 09:32:30 crc kubenswrapper[4858]: I1122 09:32:30.040724 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jwksx"] Nov 22 09:32:30 crc kubenswrapper[4858]: I1122 09:32:30.053828 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jwksx"] Nov 22 09:32:30 crc kubenswrapper[4858]: I1122 09:32:30.694350 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:30 crc kubenswrapper[4858]: I1122 09:32:30.694418 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.556231 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54fead0-d92a-4f12-aed2-1266e9cc962b" path="/var/lib/kubelet/pods/c54fead0-d92a-4f12-aed2-1266e9cc962b/volumes" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.862814 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gsgmr"] Nov 22 09:32:31 crc kubenswrapper[4858]: E1122 09:32:31.863201 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726135ea-5ba7-49da-ac47-303d08f1ac58" containerName="mariadb-database-create" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.863221 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="726135ea-5ba7-49da-ac47-303d08f1ac58" containerName="mariadb-database-create" Nov 22 09:32:31 crc kubenswrapper[4858]: E1122 09:32:31.863245 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ad0b71-b272-4af6-a216-5fd4432bb7d7" containerName="mariadb-account-create" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.863253 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ad0b71-b272-4af6-a216-5fd4432bb7d7" containerName="mariadb-account-create" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.863447 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ad0b71-b272-4af6-a216-5fd4432bb7d7" containerName="mariadb-account-create" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.863477 4858 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="726135ea-5ba7-49da-ac47-303d08f1ac58" containerName="mariadb-database-create" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.864044 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.866568 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.872633 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-fwz9w" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.883460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gsgmr"] Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.990176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkf5d\" (UniqueName: \"kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.990420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:31 crc kubenswrapper[4858]: I1122 09:32:31.990609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.092568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.093134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.093296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkf5d\" (UniqueName: \"kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.101753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.113210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.126873 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkf5d\" (UniqueName: \"kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d\") pod \"heat-db-sync-gsgmr\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.234354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gsgmr" Nov 22 09:32:32 crc kubenswrapper[4858]: I1122 09:32:32.689880 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gsgmr"] Nov 22 09:32:33 crc kubenswrapper[4858]: I1122 09:32:33.036421 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsgmr" event={"ID":"2c75b585-b5a6-4cc7-a73d-8a56862d2aef","Type":"ContainerStarted","Data":"7b73d21d4b0f2dae0833b7908730185af33dba8d22b8cba502c389f662a18db7"} Nov 22 09:32:37 crc kubenswrapper[4858]: I1122 09:32:37.688555 4858 scope.go:117] "RemoveContainer" containerID="40ccf03462a8b5ea103a8d94847af5b050bcec5f047d293e64c5bb2b02000f3d" Nov 22 09:32:37 crc kubenswrapper[4858]: I1122 09:32:37.722709 4858 scope.go:117] "RemoveContainer" containerID="815846bf78109a07a1ed5511078ecc9e2d58555343c5df1832a12e9e1ef085a0" Nov 22 09:32:37 crc kubenswrapper[4858]: I1122 09:32:37.793271 4858 scope.go:117] "RemoveContainer" containerID="f9467c2155d4788c8502584e12f867bf1fc8d6a67ca589b9132124df9f592c10" Nov 22 09:32:40 crc kubenswrapper[4858]: I1122 09:32:40.695481 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Nov 22 09:32:53 crc kubenswrapper[4858]: I1122 09:32:53.641494 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:55 crc kubenswrapper[4858]: I1122 09:32:55.268448 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:32:55 crc kubenswrapper[4858]: I1122 09:32:55.358343 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:32:55 crc kubenswrapper[4858]: I1122 09:32:55.358592 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon-log" containerID="cri-o://2a918e4fcbaf137796a0b1904f7b3db87767ea1e2d0f817872ea17bfa2bef504" gracePeriod=30 Nov 22 09:32:55 crc kubenswrapper[4858]: I1122 09:32:55.358871 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" containerID="cri-o://72ca74897600f02e691c62c445465e8f3abb5ca42e51913fe100ca110fa41167" gracePeriod=30 Nov 22 09:32:59 crc kubenswrapper[4858]: I1122 09:32:59.337763 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" 
containerID="72ca74897600f02e691c62c445465e8f3abb5ca42e51913fe100ca110fa41167" exitCode=0 Nov 22 09:32:59 crc kubenswrapper[4858]: I1122 09:32:59.337823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerDied","Data":"72ca74897600f02e691c62c445465e8f3abb5ca42e51913fe100ca110fa41167"} Nov 22 09:33:00 crc kubenswrapper[4858]: I1122 09:33:00.054001 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b93-account-create-t58xd"] Nov 22 09:33:00 crc kubenswrapper[4858]: I1122 09:33:00.071265 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6srlp"] Nov 22 09:33:00 crc kubenswrapper[4858]: I1122 09:33:00.079784 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b93-account-create-t58xd"] Nov 22 09:33:00 crc kubenswrapper[4858]: I1122 09:33:00.090483 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6srlp"] Nov 22 09:33:01 crc kubenswrapper[4858]: I1122 09:33:01.545692 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715" path="/var/lib/kubelet/pods/29ec3f5a-a8ce-4b39-94ea-2f0cb16c9715/volumes" Nov 22 09:33:01 crc kubenswrapper[4858]: I1122 09:33:01.546638 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37256fab-1ed7-4d0e-92f1-eead13a7c3b6" path="/var/lib/kubelet/pods/37256fab-1ed7-4d0e-92f1-eead13a7c3b6/volumes" Nov 22 09:33:01 crc kubenswrapper[4858]: E1122 09:33:01.564841 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761" Nov 22 09:33:01 crc kubenswrapper[4858]: E1122 09:33:01.564894 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761" Nov 22 09:33:01 crc kubenswrapper[4858]: E1122 09:33:01.565028 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkf5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-gsgmr_openstack(2c75b585-b5a6-4cc7-a73d-8a56862d2aef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 09:33:01 crc kubenswrapper[4858]: E1122 09:33:01.566271 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-gsgmr" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" Nov 22 09:33:02 crc kubenswrapper[4858]: E1122 09:33:02.380957 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761\\\"\"" pod="openstack/heat-db-sync-gsgmr" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" Nov 22 09:33:04 crc kubenswrapper[4858]: I1122 09:33:04.578565 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.120:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8443: connect: connection refused" Nov 22 09:33:09 crc kubenswrapper[4858]: I1122 09:33:09.030634 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gvpps"] Nov 22 09:33:09 crc kubenswrapper[4858]: I1122 09:33:09.043706 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gvpps"] Nov 22 09:33:09 crc kubenswrapper[4858]: I1122 09:33:09.547804 4858 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d" path="/var/lib/kubelet/pods/e5e9a1d9-1aa6-4b59-82b5-bba9b099c94d/volumes" Nov 22 09:33:14 crc kubenswrapper[4858]: I1122 09:33:14.578467 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.120:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8443: connect: connection refused" Nov 22 09:33:17 crc kubenswrapper[4858]: I1122 09:33:17.568226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsgmr" event={"ID":"2c75b585-b5a6-4cc7-a73d-8a56862d2aef","Type":"ContainerStarted","Data":"7847846fa51b831ff0dc10903739c4f5fccbb778f7df1c7d441f41c30798dea3"} Nov 22 09:33:17 crc kubenswrapper[4858]: I1122 09:33:17.596255 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gsgmr" podStartSLOduration=2.959230337 podStartE2EDuration="46.596235946s" podCreationTimestamp="2025-11-22 09:32:31 +0000 UTC" firstStartedPulling="2025-11-22 09:32:32.698113249 +0000 UTC m=+8514.539536255" lastFinishedPulling="2025-11-22 09:33:16.335118868 +0000 UTC m=+8558.176541864" observedRunningTime="2025-11-22 09:33:17.591916258 +0000 UTC m=+8559.433339294" watchObservedRunningTime="2025-11-22 09:33:17.596235946 +0000 UTC m=+8559.437658962" Nov 22 09:33:19 crc kubenswrapper[4858]: I1122 09:33:19.590945 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" containerID="7847846fa51b831ff0dc10903739c4f5fccbb778f7df1c7d441f41c30798dea3" exitCode=0 Nov 22 09:33:19 crc kubenswrapper[4858]: I1122 09:33:19.591038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsgmr" event={"ID":"2c75b585-b5a6-4cc7-a73d-8a56862d2aef","Type":"ContainerDied","Data":"7847846fa51b831ff0dc10903739c4f5fccbb778f7df1c7d441f41c30798dea3"} Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.037547 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gsgmr" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.165263 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle\") pod \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.165368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data\") pod \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.165442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkf5d\" (UniqueName: \"kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d\") pod \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\" (UID: \"2c75b585-b5a6-4cc7-a73d-8a56862d2aef\") " Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.170207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d" (OuterVolumeSpecName: "kube-api-access-nkf5d") pod "2c75b585-b5a6-4cc7-a73d-8a56862d2aef" (UID: "2c75b585-b5a6-4cc7-a73d-8a56862d2aef"). InnerVolumeSpecName "kube-api-access-nkf5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.191481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c75b585-b5a6-4cc7-a73d-8a56862d2aef" (UID: "2c75b585-b5a6-4cc7-a73d-8a56862d2aef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.231999 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data" (OuterVolumeSpecName: "config-data") pod "2c75b585-b5a6-4cc7-a73d-8a56862d2aef" (UID: "2c75b585-b5a6-4cc7-a73d-8a56862d2aef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.268100 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.268136 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.268148 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkf5d\" (UniqueName: \"kubernetes.io/projected/2c75b585-b5a6-4cc7-a73d-8a56862d2aef-kube-api-access-nkf5d\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.609659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsgmr" event={"ID":"2c75b585-b5a6-4cc7-a73d-8a56862d2aef","Type":"ContainerDied","Data":"7b73d21d4b0f2dae0833b7908730185af33dba8d22b8cba502c389f662a18db7"} Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.609694 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b73d21d4b0f2dae0833b7908730185af33dba8d22b8cba502c389f662a18db7" Nov 22 09:33:21 crc kubenswrapper[4858]: I1122 09:33:21.609712 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gsgmr" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.640942 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:33:22 crc kubenswrapper[4858]: E1122 09:33:22.641737 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" containerName="heat-db-sync" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.641754 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" containerName="heat-db-sync" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.642023 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" containerName="heat-db-sync" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.642863 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.644962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-fwz9w" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.650875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.651378 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.667137 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.753845 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.754959 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.758353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.773012 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.810280 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.810513 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpqs\" (UniqueName: \"kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.810552 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.810760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.820518 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.822733 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.825580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.841683 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937142 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpqs\" (UniqueName: \"kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: 
\"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.937382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmhm\" (UniqueName: \"kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.938095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.938144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9nk\" (UniqueName: \"kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.943889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.945091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.945272 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:22 crc kubenswrapper[4858]: I1122 09:33:22.964944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpqs\" (UniqueName: \"kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs\") pod \"heat-engine-85f56dff4b-vjgls\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tmhm\" (UniqueName: \"kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: 
\"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk9nk\" (UniqueName: \"kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.040818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.046173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.048334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 
crc kubenswrapper[4858]: I1122 09:33:23.048967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.049507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.051015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.054189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.063029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tmhm\" (UniqueName: \"kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm\") pod \"heat-cfnapi-6d76dbc6-g457m\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.066031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk9nk\" (UniqueName: \"kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk\") pod \"heat-api-77496c8cf4-8zwgc\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.087142 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.142105 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.264707 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.643293 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.756772 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:23 crc kubenswrapper[4858]: W1122 09:33:23.757899 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403d2731_4f70_4fff_977f_edc2201aaeb0.slice/crio-426a627cce223f6c4fe836813a826af5e5f44f50107a5a1668de0c1ae74ad325 WatchSource:0}: Error finding container 426a627cce223f6c4fe836813a826af5e5f44f50107a5a1668de0c1ae74ad325: Status 404 returned error can't find the container with id 426a627cce223f6c4fe836813a826af5e5f44f50107a5a1668de0c1ae74ad325 Nov 22 09:33:23 crc kubenswrapper[4858]: I1122 09:33:23.873011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:33:23 crc kubenswrapper[4858]: W1122 09:33:23.874828 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29996bc1_83cc_4148_a49a_80fb702c15d8.slice/crio-c1e1c530941aae0cff0a7b13b4be1586c5201bc2fcf1de0ee22075923c032242 WatchSource:0}: Error finding container c1e1c530941aae0cff0a7b13b4be1586c5201bc2fcf1de0ee22075923c032242: Status 404 returned error can't find the container with id c1e1c530941aae0cff0a7b13b4be1586c5201bc2fcf1de0ee22075923c032242 Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.578288 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65ddb89f8-tmrff" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.120:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.120:8443: connect: connection refused" Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.578768 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.639829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85f56dff4b-vjgls" event={"ID":"29996bc1-83cc-4148-a49a-80fb702c15d8","Type":"ContainerStarted","Data":"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd"} Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.639891 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85f56dff4b-vjgls" event={"ID":"29996bc1-83cc-4148-a49a-80fb702c15d8","Type":"ContainerStarted","Data":"c1e1c530941aae0cff0a7b13b4be1586c5201bc2fcf1de0ee22075923c032242"} Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.641154 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.646363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d76dbc6-g457m" event={"ID":"403d2731-4f70-4fff-977f-edc2201aaeb0","Type":"ContainerStarted","Data":"426a627cce223f6c4fe836813a826af5e5f44f50107a5a1668de0c1ae74ad325"} Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.650925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77496c8cf4-8zwgc" 
event={"ID":"7fc3863f-1077-45d2-9943-d321bdfc0b83","Type":"ContainerStarted","Data":"9b4b8bdded5a1eacb384b895ae1371a2427ebde2c08876a70f03c331f217b3e4"} Nov 22 09:33:24 crc kubenswrapper[4858]: I1122 09:33:24.659214 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-85f56dff4b-vjgls" podStartSLOduration=2.65919615 podStartE2EDuration="2.65919615s" podCreationTimestamp="2025-11-22 09:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:33:24.654093456 +0000 UTC m=+8566.495516462" watchObservedRunningTime="2025-11-22 09:33:24.65919615 +0000 UTC m=+8566.500619156" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.678343 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerID="2a918e4fcbaf137796a0b1904f7b3db87767ea1e2d0f817872ea17bfa2bef504" exitCode=137 Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.678399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerDied","Data":"2a918e4fcbaf137796a0b1904f7b3db87767ea1e2d0f817872ea17bfa2bef504"} Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.771569 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900148 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900272 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900306 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: 
\"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900502 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7jbv\" (UniqueName: \"kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv\") pod \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\" (UID: \"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96\") " Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.900976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs" (OuterVolumeSpecName: "logs") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.908950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.911580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv" (OuterVolumeSpecName: "kube-api-access-f7jbv") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "kube-api-access-f7jbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.933281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data" (OuterVolumeSpecName: "config-data") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.933729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts" (OuterVolumeSpecName: "scripts") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.945495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:25 crc kubenswrapper[4858]: I1122 09:33:25.962435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" (UID: "2ea7f39a-d3f5-4fc7-b08e-075d7806ba96"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.002458 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.002728 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.003001 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.003108 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.003202 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.003275 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.003358 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7jbv\" (UniqueName: \"kubernetes.io/projected/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96-kube-api-access-f7jbv\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.688257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65ddb89f8-tmrff" event={"ID":"2ea7f39a-d3f5-4fc7-b08e-075d7806ba96","Type":"ContainerDied","Data":"e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d"} Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.688600 4858 scope.go:117] "RemoveContainer" containerID="72ca74897600f02e691c62c445465e8f3abb5ca42e51913fe100ca110fa41167" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.688340 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65ddb89f8-tmrff" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.694653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d76dbc6-g457m" event={"ID":"403d2731-4f70-4fff-977f-edc2201aaeb0","Type":"ContainerStarted","Data":"d305b78231f0fda866ea34be93e3b496ad8b9d4533e5bdeca6f6da622abda866"} Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.695063 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.699351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77496c8cf4-8zwgc" event={"ID":"7fc3863f-1077-45d2-9943-d321bdfc0b83","Type":"ContainerStarted","Data":"ac9f004d6f7c2d6e36da03a252d33082a3e3c151bcc52ac2ee446694e5329639"} Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.699435 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.731955 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d76dbc6-g457m" podStartSLOduration=3.003255581 podStartE2EDuration="4.731921012s" podCreationTimestamp="2025-11-22 09:33:22 +0000 UTC" firstStartedPulling="2025-11-22 09:33:23.759932311 +0000 UTC m=+8565.601355317" lastFinishedPulling="2025-11-22 09:33:25.488597742 +0000 UTC m=+8567.330020748" observedRunningTime="2025-11-22 09:33:26.715236688 +0000 UTC m=+8568.556659694" watchObservedRunningTime="2025-11-22 09:33:26.731921012 +0000 UTC m=+8568.573344018" Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.755056 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.770691 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65ddb89f8-tmrff"] Nov 22 09:33:26 crc kubenswrapper[4858]: I1122 09:33:26.778859 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-77496c8cf4-8zwgc" podStartSLOduration=2.960129319 podStartE2EDuration="4.778835003s" podCreationTimestamp="2025-11-22 09:33:22 +0000 UTC" firstStartedPulling="2025-11-22 09:33:23.65741152 +0000 UTC m=+8565.498834526" lastFinishedPulling="2025-11-22 09:33:25.476117204 +0000 UTC m=+8567.317540210" observedRunningTime="2025-11-22 09:33:26.769294218 +0000 UTC m=+8568.610717224" watchObservedRunningTime="2025-11-22 09:33:26.778835003 +0000 UTC m=+8568.620258009" Nov 22 09:33:26 crc kubenswrapper[4858]: E1122 09:33:26.844730 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea7f39a_d3f5_4fc7_b08e_075d7806ba96.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea7f39a_d3f5_4fc7_b08e_075d7806ba96.slice/crio-e3669bb7e2804594052dbbfe6987a2e1eda53b769d6992c5d6e3251c1fc00d2d\": RecentStats: unable to find data in memory cache]" Nov 22 09:33:27 crc kubenswrapper[4858]: I1122 09:33:27.015688 4858 scope.go:117] "RemoveContainer" containerID="2a918e4fcbaf137796a0b1904f7b3db87767ea1e2d0f817872ea17bfa2bef504" Nov 22 09:33:27 crc kubenswrapper[4858]: I1122 09:33:27.552959 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" 
path="/var/lib/kubelet/pods/2ea7f39a-d3f5-4fc7-b08e-075d7806ba96/volumes" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.701194 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:33:29 crc kubenswrapper[4858]: E1122 09:33:29.701937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon-log" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.701952 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon-log" Nov 22 09:33:29 crc kubenswrapper[4858]: E1122 09:33:29.701974 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.701980 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.702182 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon-log" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.702194 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea7f39a-d3f5-4fc7-b08e-075d7806ba96" containerName="horizon" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.703006 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.721349 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.754061 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.773685 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.775942 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.778595 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.797360 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.864014 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.883482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4g7k\" (UniqueName: \"kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.883565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884477 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvlxr\" (UniqueName: \"kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle\") pod 
\"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884696 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvtj\" (UniqueName: \"kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.884752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4g7k\" (UniqueName: \"kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986182 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvlxr\" (UniqueName: \"kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:29 crc kubenswrapper[4858]: I1122 09:33:29.986388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvtj\" (UniqueName: \"kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.003309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.003646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.004033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.006605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.007125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.007759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.008093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.009707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.025580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom\") pod \"heat-api-857dbb456d-p5965\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.026700 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvtj\" (UniqueName: \"kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj\") pod \"heat-engine-7f4fc69954-bcngv\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.027248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvlxr\" (UniqueName: \"kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr\") pod \"heat-api-857dbb456d-p5965\" 
(UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.031069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4g7k\" (UniqueName: \"kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k\") pod \"heat-cfnapi-5586944547-98mn9\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.103818 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.122811 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.322353 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:30 crc kubenswrapper[4858]: W1122 09:33:30.749308 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac WatchSource:0}: Error finding container 3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac: Status 404 returned error can't find the container with id 3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.752649 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.804429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5586944547-98mn9" event={"ID":"f19af9ca-499c-4038-be7c-820ec5c605b4","Type":"ContainerStarted","Data":"3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac"} Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.842244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:30 crc kubenswrapper[4858]: W1122 09:33:30.852085 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354d4cb5_ddbd_4b00_94da_1e52665a46ea.slice/crio-7c16a5b985cf3719809cf1125e0432c3e0e2301c319c39ccfc5e8c79599ebe47 WatchSource:0}: Error finding container 7c16a5b985cf3719809cf1125e0432c3e0e2301c319c39ccfc5e8c79599ebe47: Status 404 returned error can't find the container with id 7c16a5b985cf3719809cf1125e0432c3e0e2301c319c39ccfc5e8c79599ebe47 Nov 22 09:33:30 crc kubenswrapper[4858]: I1122 09:33:30.960210 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:33:30 crc kubenswrapper[4858]: W1122 09:33:30.961418 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36e4c2a_3eca_4150_867c_937eb02c77f1.slice/crio-1e2729a246c7211df409b15b2837ab43bef49cb3a98d5fd6f2d82d99985778f8 WatchSource:0}: Error finding container 1e2729a246c7211df409b15b2837ab43bef49cb3a98d5fd6f2d82d99985778f8: Status 404 returned error can't find the container with id 1e2729a246c7211df409b15b2837ab43bef49cb3a98d5fd6f2d82d99985778f8 Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.289676 4858 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.290399 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-77496c8cf4-8zwgc" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" containerID="cri-o://ac9f004d6f7c2d6e36da03a252d33082a3e3c151bcc52ac2ee446694e5329639" gracePeriod=60 Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.319381 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.321128 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.323449 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.323701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.329993 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.330058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.330094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.330187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.330216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhc8\" (UniqueName: \"kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.330247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 
09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.333961 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.334203 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6d76dbc6-g457m" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" containerID="cri-o://d305b78231f0fda866ea34be93e3b496ad8b9d4533e5bdeca6f6da622abda866" gracePeriod=60 Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.349643 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.365743 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-77496c8cf4-8zwgc" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.128:8004/healthcheck\": EOF" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.370755 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6d76dbc6-g457m" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.129:8000/healthcheck\": EOF" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.398155 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.402888 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.406658 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.406877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.425408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhc8\" (UniqueName: \"kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.431862 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbc9f\" (UniqueName: \"kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.533783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635427 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635592 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbc9f\" (UniqueName: \"kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.635806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.766702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.766705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.767074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.767825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.768137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.768210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhc8\" (UniqueName: \"kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8\") pod \"heat-api-554bc84945-x99pt\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.768740 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.768753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.768801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.769517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.770000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbc9f\" (UniqueName: \"kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.770285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data\") pod \"heat-cfnapi-58695b9cb9-h2cjl\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.814987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f4fc69954-bcngv" 
event={"ID":"a36e4c2a-3eca-4150-867c-937eb02c77f1","Type":"ContainerStarted","Data":"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9"} Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.815035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f4fc69954-bcngv" event={"ID":"a36e4c2a-3eca-4150-867c-937eb02c77f1","Type":"ContainerStarted","Data":"1e2729a246c7211df409b15b2837ab43bef49cb3a98d5fd6f2d82d99985778f8"} Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.816688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerStarted","Data":"52625b1445345b26d5bb3233de3054ea6b225c02b9169fc5be1b97acff557f66"} Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.816729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerStarted","Data":"7c16a5b985cf3719809cf1125e0432c3e0e2301c319c39ccfc5e8c79599ebe47"} Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.816769 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.818373 4858 generic.go:334] "Generic (PLEG): container finished" podID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerID="e3975dfe54f1cfc7e3c56841d85ebccdf8ceae6c7cdd9b854d8a82acd6da39a6" exitCode=1 Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.818410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5586944547-98mn9" event={"ID":"f19af9ca-499c-4038-be7c-820ec5c605b4","Type":"ContainerDied","Data":"e3975dfe54f1cfc7e3c56841d85ebccdf8ceae6c7cdd9b854d8a82acd6da39a6"} Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.818647 4858 scope.go:117] "RemoveContainer" containerID="e3975dfe54f1cfc7e3c56841d85ebccdf8ceae6c7cdd9b854d8a82acd6da39a6" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.852090 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7f4fc69954-bcngv" podStartSLOduration=2.85206891 podStartE2EDuration="2.85206891s" podCreationTimestamp="2025-11-22 09:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:33:31.848899729 +0000 UTC m=+8573.690322735" watchObservedRunningTime="2025-11-22 09:33:31.85206891 +0000 UTC m=+8573.693491916" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.890351 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-857dbb456d-p5965" podStartSLOduration=2.890314305 podStartE2EDuration="2.890314305s" podCreationTimestamp="2025-11-22 09:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:33:31.880883743 +0000 UTC m=+8573.722306759" watchObservedRunningTime="2025-11-22 09:33:31.890314305 +0000 UTC m=+8573.731737311" Nov 22 09:33:31 crc kubenswrapper[4858]: I1122 09:33:31.951438 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.034919 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.499241 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.637812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:33:32 crc kubenswrapper[4858]: W1122 09:33:32.706677 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44b3c43_4aed_4726_a49e_693cd279bca6.slice/crio-bcecdda848a13002e10d69cd33551948facb79bc1968678772f24017147be0e3 WatchSource:0}: Error finding container bcecdda848a13002e10d69cd33551948facb79bc1968678772f24017147be0e3: Status 404 returned error can't find the container with id bcecdda848a13002e10d69cd33551948facb79bc1968678772f24017147be0e3 Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.836390 4858 generic.go:334] "Generic (PLEG): container finished" podID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerID="52625b1445345b26d5bb3233de3054ea6b225c02b9169fc5be1b97acff557f66" exitCode=1 Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.836499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerDied","Data":"52625b1445345b26d5bb3233de3054ea6b225c02b9169fc5be1b97acff557f66"} Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.837232 4858 scope.go:117] "RemoveContainer" containerID="52625b1445345b26d5bb3233de3054ea6b225c02b9169fc5be1b97acff557f66" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.843297 4858 generic.go:334] "Generic (PLEG): container finished" podID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerID="1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e" exitCode=1 Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.843603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5586944547-98mn9" event={"ID":"f19af9ca-499c-4038-be7c-820ec5c605b4","Type":"ContainerDied","Data":"1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e"} Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.843728 4858 scope.go:117] "RemoveContainer" containerID="e3975dfe54f1cfc7e3c56841d85ebccdf8ceae6c7cdd9b854d8a82acd6da39a6" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.843937 4858 scope.go:117] "RemoveContainer" containerID="1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e" Nov 22 09:33:32 crc kubenswrapper[4858]: E1122 09:33:32.844241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5586944547-98mn9_openstack(f19af9ca-499c-4038-be7c-820ec5c605b4)\"" pod="openstack/heat-cfnapi-5586944547-98mn9" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.847508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" event={"ID":"c44b3c43-4aed-4726-a49e-693cd279bca6","Type":"ContainerStarted","Data":"bcecdda848a13002e10d69cd33551948facb79bc1968678772f24017147be0e3"} Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.854384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554bc84945-x99pt" 
event={"ID":"f559e642-5710-41ad-b508-a76cf28d62ca","Type":"ContainerStarted","Data":"7842f8034764b2924bf41d03e6be4b569c6b8d8f6de37bcfc0f1067002067f50"} Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.854700 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.855128 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:32 crc kubenswrapper[4858]: I1122 09:33:32.923977 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-554bc84945-x99pt" podStartSLOduration=1.923914092 podStartE2EDuration="1.923914092s" podCreationTimestamp="2025-11-22 09:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:33:32.908655244 +0000 UTC m=+8574.750078250" watchObservedRunningTime="2025-11-22 09:33:32.923914092 +0000 UTC m=+8574.765337098" Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.867409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerStarted","Data":"ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c"} Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.867869 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.871843 4858 scope.go:117] "RemoveContainer" containerID="1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e" Nov 22 09:33:33 crc kubenswrapper[4858]: E1122 09:33:33.872121 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5586944547-98mn9_openstack(f19af9ca-499c-4038-be7c-820ec5c605b4)\"" pod="openstack/heat-cfnapi-5586944547-98mn9" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.875091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" event={"ID":"c44b3c43-4aed-4726-a49e-693cd279bca6","Type":"ContainerStarted","Data":"3e9f60a9242f5ea9166f64aec3d772c195c831a12e40616f61d15e94761b65aa"} Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.875506 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.880194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554bc84945-x99pt" event={"ID":"f559e642-5710-41ad-b508-a76cf28d62ca","Type":"ContainerStarted","Data":"bfc6172709f280143555d90466293bfa2c52e1d1c69bc716075bf79ffcfb671e"} Nov 22 09:33:33 crc kubenswrapper[4858]: I1122 09:33:33.917453 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" podStartSLOduration=2.917429978 podStartE2EDuration="2.917429978s" podCreationTimestamp="2025-11-22 09:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:33:33.916984404 +0000 UTC m=+8575.758407410" watchObservedRunningTime="2025-11-22 09:33:33.917429978 +0000 UTC m=+8575.758852984" Nov 22 09:33:34 crc 
kubenswrapper[4858]: I1122 09:33:34.892650 4858 generic.go:334] "Generic (PLEG): container finished" podID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerID="ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c" exitCode=1 Nov 22 09:33:34 crc kubenswrapper[4858]: I1122 09:33:34.893179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerDied","Data":"ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c"} Nov 22 09:33:34 crc kubenswrapper[4858]: I1122 09:33:34.893217 4858 scope.go:117] "RemoveContainer" containerID="52625b1445345b26d5bb3233de3054ea6b225c02b9169fc5be1b97acff557f66" Nov 22 09:33:34 crc kubenswrapper[4858]: I1122 09:33:34.893621 4858 scope.go:117] "RemoveContainer" containerID="ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c" Nov 22 09:33:34 crc kubenswrapper[4858]: E1122 09:33:34.893817 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-857dbb456d-p5965_openstack(354d4cb5-ddbd-4b00-94da-1e52665a46ea)\"" pod="openstack/heat-api-857dbb456d-p5965" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" Nov 22 09:33:35 crc kubenswrapper[4858]: I1122 09:33:35.104613 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:35 crc kubenswrapper[4858]: I1122 09:33:35.104699 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:35 crc kubenswrapper[4858]: I1122 09:33:35.105448 4858 scope.go:117] "RemoveContainer" containerID="1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e" Nov 22 09:33:35 crc kubenswrapper[4858]: E1122 09:33:35.105693 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5586944547-98mn9_openstack(f19af9ca-499c-4038-be7c-820ec5c605b4)\"" pod="openstack/heat-cfnapi-5586944547-98mn9" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" Nov 22 09:33:35 crc kubenswrapper[4858]: I1122 09:33:35.124105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:35 crc kubenswrapper[4858]: I1122 09:33:35.902918 4858 scope.go:117] "RemoveContainer" containerID="ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c" Nov 22 09:33:35 crc kubenswrapper[4858]: E1122 09:33:35.903370 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-857dbb456d-p5965_openstack(354d4cb5-ddbd-4b00-94da-1e52665a46ea)\"" pod="openstack/heat-api-857dbb456d-p5965" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.700258 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-77496c8cf4-8zwgc" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.128:8004/healthcheck\": read tcp 10.217.0.2:40732->10.217.1.128:8004: read: connection reset by peer" Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.771615 4858 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/heat-cfnapi-6d76dbc6-g457m" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.129:8000/healthcheck\": read tcp 10.217.0.2:56736->10.217.1.129:8000: read: connection reset by peer" Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.912916 4858 generic.go:334] "Generic (PLEG): container finished" podID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerID="d305b78231f0fda866ea34be93e3b496ad8b9d4533e5bdeca6f6da622abda866" exitCode=0 Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.913006 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d76dbc6-g457m" event={"ID":"403d2731-4f70-4fff-977f-edc2201aaeb0","Type":"ContainerDied","Data":"d305b78231f0fda866ea34be93e3b496ad8b9d4533e5bdeca6f6da622abda866"} Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.916720 4858 generic.go:334] "Generic (PLEG): container finished" podID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerID="ac9f004d6f7c2d6e36da03a252d33082a3e3c151bcc52ac2ee446694e5329639" exitCode=0 Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.917510 4858 scope.go:117] "RemoveContainer" containerID="ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c" Nov 22 09:33:36 crc kubenswrapper[4858]: E1122 09:33:36.917814 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-857dbb456d-p5965_openstack(354d4cb5-ddbd-4b00-94da-1e52665a46ea)\"" pod="openstack/heat-api-857dbb456d-p5965" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" Nov 22 09:33:36 crc kubenswrapper[4858]: I1122 09:33:36.918154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77496c8cf4-8zwgc" event={"ID":"7fc3863f-1077-45d2-9943-d321bdfc0b83","Type":"ContainerDied","Data":"ac9f004d6f7c2d6e36da03a252d33082a3e3c151bcc52ac2ee446694e5329639"} Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.527738 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.532788 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data\") pod \"403d2731-4f70-4fff-977f-edc2201aaeb0\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle\") pod \"403d2731-4f70-4fff-977f-edc2201aaeb0\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tmhm\" (UniqueName: \"kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm\") pod \"403d2731-4f70-4fff-977f-edc2201aaeb0\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615145 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data\") pod \"7fc3863f-1077-45d2-9943-d321bdfc0b83\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom\") pod \"7fc3863f-1077-45d2-9943-d321bdfc0b83\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle\") pod \"7fc3863f-1077-45d2-9943-d321bdfc0b83\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk9nk\" (UniqueName: \"kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk\") pod \"7fc3863f-1077-45d2-9943-d321bdfc0b83\" (UID: \"7fc3863f-1077-45d2-9943-d321bdfc0b83\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.615419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom\") pod \"403d2731-4f70-4fff-977f-edc2201aaeb0\" (UID: \"403d2731-4f70-4fff-977f-edc2201aaeb0\") " Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.620365 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7fc3863f-1077-45d2-9943-d321bdfc0b83" (UID: "7fc3863f-1077-45d2-9943-d321bdfc0b83"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.620474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "403d2731-4f70-4fff-977f-edc2201aaeb0" (UID: "403d2731-4f70-4fff-977f-edc2201aaeb0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.620523 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk" (OuterVolumeSpecName: "kube-api-access-hk9nk") pod "7fc3863f-1077-45d2-9943-d321bdfc0b83" (UID: "7fc3863f-1077-45d2-9943-d321bdfc0b83"). InnerVolumeSpecName "kube-api-access-hk9nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.626929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm" (OuterVolumeSpecName: "kube-api-access-4tmhm") pod "403d2731-4f70-4fff-977f-edc2201aaeb0" (UID: "403d2731-4f70-4fff-977f-edc2201aaeb0"). InnerVolumeSpecName "kube-api-access-4tmhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.645917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc3863f-1077-45d2-9943-d321bdfc0b83" (UID: "7fc3863f-1077-45d2-9943-d321bdfc0b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.650262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "403d2731-4f70-4fff-977f-edc2201aaeb0" (UID: "403d2731-4f70-4fff-977f-edc2201aaeb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.671580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data" (OuterVolumeSpecName: "config-data") pod "403d2731-4f70-4fff-977f-edc2201aaeb0" (UID: "403d2731-4f70-4fff-977f-edc2201aaeb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.674248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data" (OuterVolumeSpecName: "config-data") pod "7fc3863f-1077-45d2-9943-d321bdfc0b83" (UID: "7fc3863f-1077-45d2-9943-d321bdfc0b83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717389 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk9nk\" (UniqueName: \"kubernetes.io/projected/7fc3863f-1077-45d2-9943-d321bdfc0b83-kube-api-access-hk9nk\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717421 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717431 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717439 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/403d2731-4f70-4fff-977f-edc2201aaeb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717447 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tmhm\" (UniqueName: \"kubernetes.io/projected/403d2731-4f70-4fff-977f-edc2201aaeb0-kube-api-access-4tmhm\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717457 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717464 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.717473 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc3863f-1077-45d2-9943-d321bdfc0b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.928777 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d76dbc6-g457m" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.928772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d76dbc6-g457m" event={"ID":"403d2731-4f70-4fff-977f-edc2201aaeb0","Type":"ContainerDied","Data":"426a627cce223f6c4fe836813a826af5e5f44f50107a5a1668de0c1ae74ad325"} Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.928907 4858 scope.go:117] "RemoveContainer" containerID="d305b78231f0fda866ea34be93e3b496ad8b9d4533e5bdeca6f6da622abda866" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.930552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77496c8cf4-8zwgc" event={"ID":"7fc3863f-1077-45d2-9943-d321bdfc0b83","Type":"ContainerDied","Data":"9b4b8bdded5a1eacb384b895ae1371a2427ebde2c08876a70f03c331f217b3e4"} Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.930618 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77496c8cf4-8zwgc" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.946015 4858 scope.go:117] "RemoveContainer" containerID="30f4bdd049593318e80955a4b1cbd854fd71290ca3ed692dae22858c31f0db9f" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.960522 4858 scope.go:117] "RemoveContainer" containerID="ac9f004d6f7c2d6e36da03a252d33082a3e3c151bcc52ac2ee446694e5329639" Nov 22 09:33:37 crc kubenswrapper[4858]: I1122 09:33:37.997130 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.010811 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d76dbc6-g457m"] Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.026591 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.035795 4858 scope.go:117] "RemoveContainer" containerID="d6d7d09344eacdcf93e1263e234a50db949ca2ba8c11ba1deaaadd53c0577551" Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.038228 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-77496c8cf4-8zwgc"] Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.060928 4858 scope.go:117] "RemoveContainer" containerID="254bd3260a9ba1aee7f6ba007d9fdde379fd4e6f756722769fabf067b3e29d32" Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.624717 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:33:38 crc kubenswrapper[4858]: I1122 09:33:38.711172 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.104049 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.259225 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data\") pod \"f19af9ca-499c-4038-be7c-820ec5c605b4\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.259400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4g7k\" (UniqueName: \"kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k\") pod \"f19af9ca-499c-4038-be7c-820ec5c605b4\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.259439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom\") pod \"f19af9ca-499c-4038-be7c-820ec5c605b4\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.259600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle\") pod \"f19af9ca-499c-4038-be7c-820ec5c605b4\" (UID: \"f19af9ca-499c-4038-be7c-820ec5c605b4\") " Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.267577 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k" (OuterVolumeSpecName: "kube-api-access-l4g7k") pod "f19af9ca-499c-4038-be7c-820ec5c605b4" (UID: "f19af9ca-499c-4038-be7c-820ec5c605b4"). InnerVolumeSpecName "kube-api-access-l4g7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.268543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f19af9ca-499c-4038-be7c-820ec5c605b4" (UID: "f19af9ca-499c-4038-be7c-820ec5c605b4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.298600 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19af9ca-499c-4038-be7c-820ec5c605b4" (UID: "f19af9ca-499c-4038-be7c-820ec5c605b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.336093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data" (OuterVolumeSpecName: "config-data") pod "f19af9ca-499c-4038-be7c-820ec5c605b4" (UID: "f19af9ca-499c-4038-be7c-820ec5c605b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.361630 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.361701 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.361715 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4g7k\" (UniqueName: \"kubernetes.io/projected/f19af9ca-499c-4038-be7c-820ec5c605b4-kube-api-access-l4g7k\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.361725 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19af9ca-499c-4038-be7c-820ec5c605b4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.554782 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" path="/var/lib/kubelet/pods/403d2731-4f70-4fff-977f-edc2201aaeb0/volumes" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.555462 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" path="/var/lib/kubelet/pods/7fc3863f-1077-45d2-9943-d321bdfc0b83/volumes" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.970469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5586944547-98mn9" event={"ID":"f19af9ca-499c-4038-be7c-820ec5c605b4","Type":"ContainerDied","Data":"3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac"} Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.970519 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5586944547-98mn9" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.970711 4858 scope.go:117] "RemoveContainer" containerID="1b2db0daca1292981ccc012e7bab0afec85035942862feefcc2a0a6705eade3e" Nov 22 09:33:39 crc kubenswrapper[4858]: I1122 09:33:39.997988 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:40 crc kubenswrapper[4858]: I1122 09:33:40.009360 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5586944547-98mn9"] Nov 22 09:33:41 crc kubenswrapper[4858]: I1122 09:33:41.549391 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" path="/var/lib/kubelet/pods/f19af9ca-499c-4038-be7c-820ec5c605b4/volumes" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.238936 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.316799 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.322136 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.695797 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.755755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle\") pod \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.755863 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom\") pod \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.756000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data\") pod \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.756075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvlxr\" (UniqueName: \"kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr\") pod \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\" (UID: \"354d4cb5-ddbd-4b00-94da-1e52665a46ea\") " Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.761863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr" (OuterVolumeSpecName: "kube-api-access-dvlxr") pod "354d4cb5-ddbd-4b00-94da-1e52665a46ea" (UID: "354d4cb5-ddbd-4b00-94da-1e52665a46ea"). InnerVolumeSpecName "kube-api-access-dvlxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.763231 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "354d4cb5-ddbd-4b00-94da-1e52665a46ea" (UID: "354d4cb5-ddbd-4b00-94da-1e52665a46ea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.788827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "354d4cb5-ddbd-4b00-94da-1e52665a46ea" (UID: "354d4cb5-ddbd-4b00-94da-1e52665a46ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.809188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data" (OuterVolumeSpecName: "config-data") pod "354d4cb5-ddbd-4b00-94da-1e52665a46ea" (UID: "354d4cb5-ddbd-4b00-94da-1e52665a46ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.858867 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.858908 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvlxr\" (UniqueName: \"kubernetes.io/projected/354d4cb5-ddbd-4b00-94da-1e52665a46ea-kube-api-access-dvlxr\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.858920 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:43 crc kubenswrapper[4858]: I1122 09:33:43.858931 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/354d4cb5-ddbd-4b00-94da-1e52665a46ea-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:44 crc kubenswrapper[4858]: I1122 09:33:44.020969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-857dbb456d-p5965" event={"ID":"354d4cb5-ddbd-4b00-94da-1e52665a46ea","Type":"ContainerDied","Data":"7c16a5b985cf3719809cf1125e0432c3e0e2301c319c39ccfc5e8c79599ebe47"} Nov 22 09:33:44 crc kubenswrapper[4858]: I1122 09:33:44.021562 4858 scope.go:117] "RemoveContainer" containerID="ab4ced21ed444c6302bc7bebb7df0f0bda2235a6e3d67ab0c366f1887f12f80c" Nov 22 09:33:44 crc kubenswrapper[4858]: I1122 09:33:44.021084 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-857dbb456d-p5965" Nov 22 09:33:44 crc kubenswrapper[4858]: I1122 09:33:44.072207 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:44 crc kubenswrapper[4858]: I1122 09:33:44.089882 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-857dbb456d-p5965"] Nov 22 09:33:45 crc kubenswrapper[4858]: I1122 09:33:45.311924 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:33:45 crc kubenswrapper[4858]: I1122 09:33:45.312211 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:33:45 crc kubenswrapper[4858]: I1122 09:33:45.570046 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" path="/var/lib/kubelet/pods/354d4cb5-ddbd-4b00-94da-1e52665a46ea/volumes" Nov 22 09:33:47 crc kubenswrapper[4858]: E1122 09:33:47.527679 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache]" Nov 22 09:33:50 crc kubenswrapper[4858]: I1122 09:33:50.370460 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:33:50 crc kubenswrapper[4858]: I1122 09:33:50.437956 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:33:50 crc kubenswrapper[4858]: I1122 09:33:50.438203 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-85f56dff4b-vjgls" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerName="heat-engine" containerID="cri-o://4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" gracePeriod=60 Nov 22 09:33:53 crc kubenswrapper[4858]: E1122 09:33:53.268893 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:33:53 crc kubenswrapper[4858]: E1122 09:33:53.270906 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:33:53 crc kubenswrapper[4858]: E1122 09:33:53.272800 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:33:53 crc kubenswrapper[4858]: E1122 09:33:53.272945 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-85f56dff4b-vjgls" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerName="heat-engine" Nov 22 09:33:57 crc kubenswrapper[4858]: E1122 09:33:57.760823 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.133634 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.261356 4858 generic.go:334] "Generic (PLEG): container finished" podID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" exitCode=0 Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.261653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85f56dff4b-vjgls" event={"ID":"29996bc1-83cc-4148-a49a-80fb702c15d8","Type":"ContainerDied","Data":"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd"} Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.261678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85f56dff4b-vjgls" event={"ID":"29996bc1-83cc-4148-a49a-80fb702c15d8","Type":"ContainerDied","Data":"c1e1c530941aae0cff0a7b13b4be1586c5201bc2fcf1de0ee22075923c032242"} Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.261694 4858 scope.go:117] "RemoveContainer" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.261799 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85f56dff4b-vjgls" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.285308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpqs\" (UniqueName: \"kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs\") pod \"29996bc1-83cc-4148-a49a-80fb702c15d8\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.285433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle\") pod \"29996bc1-83cc-4148-a49a-80fb702c15d8\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.285455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom\") pod \"29996bc1-83cc-4148-a49a-80fb702c15d8\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.285557 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data\") pod \"29996bc1-83cc-4148-a49a-80fb702c15d8\" (UID: \"29996bc1-83cc-4148-a49a-80fb702c15d8\") " Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.289895 4858 scope.go:117] "RemoveContainer" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" Nov 22 09:34:03 crc kubenswrapper[4858]: E1122 09:34:03.290427 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd\": container with ID starting with 4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd not found: ID does not exist" containerID="4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.290483 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd"} err="failed to get container status \"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd\": rpc error: code = NotFound desc = could not find container \"4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd\": container with ID starting with 4ad2830d759e1e75e53b4e51f0e4e01bb49f7e410b15b448c5c6a178eedef0dd not found: ID does not exist" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.291688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs" (OuterVolumeSpecName: "kube-api-access-4wpqs") pod "29996bc1-83cc-4148-a49a-80fb702c15d8" (UID: "29996bc1-83cc-4148-a49a-80fb702c15d8"). InnerVolumeSpecName "kube-api-access-4wpqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.293204 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29996bc1-83cc-4148-a49a-80fb702c15d8" (UID: "29996bc1-83cc-4148-a49a-80fb702c15d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.318045 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29996bc1-83cc-4148-a49a-80fb702c15d8" (UID: "29996bc1-83cc-4148-a49a-80fb702c15d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.357242 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data" (OuterVolumeSpecName: "config-data") pod "29996bc1-83cc-4148-a49a-80fb702c15d8" (UID: "29996bc1-83cc-4148-a49a-80fb702c15d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.387635 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wpqs\" (UniqueName: \"kubernetes.io/projected/29996bc1-83cc-4148-a49a-80fb702c15d8-kube-api-access-4wpqs\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.387675 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.387688 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.387700 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29996bc1-83cc-4148-a49a-80fb702c15d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.619964 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:34:03 crc kubenswrapper[4858]: I1122 09:34:03.636004 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-85f56dff4b-vjgls"] Nov 22 09:34:05 crc kubenswrapper[4858]: I1122 09:34:05.548289 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" path="/var/lib/kubelet/pods/29996bc1-83cc-4148-a49a-80fb702c15d8/volumes" Nov 22 09:34:08 crc kubenswrapper[4858]: E1122 09:34:08.029298 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.231777 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4"] Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232499 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232515 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232526 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerName="heat-engine" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232533 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerName="heat-engine" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232558 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232566 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232597 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232604 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.232625 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232632 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232838 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232851 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc3863f-1077-45d2-9943-d321bdfc0b83" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232866 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="403d2731-4f70-4fff-977f-edc2201aaeb0" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232880 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.232894 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="29996bc1-83cc-4148-a49a-80fb702c15d8" containerName="heat-engine" Nov 22 09:34:10 crc kubenswrapper[4858]: E1122 09:34:10.233133 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.233145 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.233405 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="354d4cb5-ddbd-4b00-94da-1e52665a46ea" containerName="heat-api" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.233420 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19af9ca-499c-4038-be7c-820ec5c605b4" containerName="heat-cfnapi" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.234658 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.243173 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4"] Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.282828 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.329864 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c6m7\" (UniqueName: \"kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.330030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.330139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.433200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.433627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c6m7\" (UniqueName: \"kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.433922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.434179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.434701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.468250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c6m7\" (UniqueName: \"kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:10 crc kubenswrapper[4858]: I1122 09:34:10.606272 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:11 crc kubenswrapper[4858]: I1122 09:34:11.148538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4"] Nov 22 09:34:11 crc kubenswrapper[4858]: I1122 09:34:11.342251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerStarted","Data":"50a06a313e546ab1dddd27239226da997cfb58d8a359d10e2740e531f8ccb1dc"} Nov 22 09:34:12 crc kubenswrapper[4858]: I1122 09:34:12.355600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerStarted","Data":"24798be8f993a79b3f2b45865cf3d81c1735cec4ed5f64c73d30418669db2ec6"} Nov 22 09:34:13 crc kubenswrapper[4858]: I1122 09:34:13.370265 4858 generic.go:334] "Generic (PLEG): container finished" podID="c666caa5-34af-4507-9521-883be4891c0c" containerID="24798be8f993a79b3f2b45865cf3d81c1735cec4ed5f64c73d30418669db2ec6" exitCode=0 Nov 22 09:34:13 crc kubenswrapper[4858]: I1122 09:34:13.370336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerDied","Data":"24798be8f993a79b3f2b45865cf3d81c1735cec4ed5f64c73d30418669db2ec6"} Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.051602 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ddns8"] Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.063249 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a285-account-create-2pfdm"] Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.072824 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ddns8"] Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.080330 4858 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a285-account-create-2pfdm"] Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.312514 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.312597 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.398657 4858 generic.go:334] "Generic (PLEG): container finished" podID="c666caa5-34af-4507-9521-883be4891c0c" containerID="0f03211181ea1ef3171c2a4f5390e210de790dd7a4353c77a7491e0817f53c5e" exitCode=0 Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.398707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerDied","Data":"0f03211181ea1ef3171c2a4f5390e210de790dd7a4353c77a7491e0817f53c5e"} Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.554267 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71ed938-4f65-4b94-8a56-d6d02a2b985a" path="/var/lib/kubelet/pods/b71ed938-4f65-4b94-8a56-d6d02a2b985a/volumes" Nov 22 09:34:15 crc kubenswrapper[4858]: I1122 09:34:15.555071 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bd1ad2-5162-4665-8b22-899141e7b863" path="/var/lib/kubelet/pods/f1bd1ad2-5162-4665-8b22-899141e7b863/volumes" Nov 22 09:34:16 crc kubenswrapper[4858]: I1122 09:34:16.427555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerStarted","Data":"99ae76946be237d9827b41bc4f59fbca74a81273f05591bb78a58f3d0bb53c12"} Nov 22 09:34:16 crc kubenswrapper[4858]: I1122 09:34:16.452135 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" podStartSLOduration=4.911513355 podStartE2EDuration="6.452112548s" podCreationTimestamp="2025-11-22 09:34:10 +0000 UTC" firstStartedPulling="2025-11-22 09:34:13.372702569 +0000 UTC m=+8615.214125575" lastFinishedPulling="2025-11-22 09:34:14.913301752 +0000 UTC m=+8616.754724768" observedRunningTime="2025-11-22 09:34:16.447870592 +0000 UTC m=+8618.289293698" watchObservedRunningTime="2025-11-22 09:34:16.452112548 +0000 UTC m=+8618.293535564" Nov 22 09:34:17 crc kubenswrapper[4858]: I1122 09:34:17.443586 4858 generic.go:334] "Generic (PLEG): container finished" podID="c666caa5-34af-4507-9521-883be4891c0c" containerID="99ae76946be237d9827b41bc4f59fbca74a81273f05591bb78a58f3d0bb53c12" exitCode=0 Nov 22 09:34:17 crc kubenswrapper[4858]: I1122 09:34:17.443672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" 
event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerDied","Data":"99ae76946be237d9827b41bc4f59fbca74a81273f05591bb78a58f3d0bb53c12"} Nov 22 09:34:18 crc kubenswrapper[4858]: E1122 09:34:18.305552 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.787276 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.845010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util\") pod \"c666caa5-34af-4507-9521-883be4891c0c\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.845192 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c6m7\" (UniqueName: \"kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7\") pod \"c666caa5-34af-4507-9521-883be4891c0c\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.845239 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle\") pod \"c666caa5-34af-4507-9521-883be4891c0c\" (UID: \"c666caa5-34af-4507-9521-883be4891c0c\") " Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.848254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle" (OuterVolumeSpecName: "bundle") pod "c666caa5-34af-4507-9521-883be4891c0c" (UID: "c666caa5-34af-4507-9521-883be4891c0c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.853698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7" (OuterVolumeSpecName: "kube-api-access-6c6m7") pod "c666caa5-34af-4507-9521-883be4891c0c" (UID: "c666caa5-34af-4507-9521-883be4891c0c"). InnerVolumeSpecName "kube-api-access-6c6m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.862773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util" (OuterVolumeSpecName: "util") pod "c666caa5-34af-4507-9521-883be4891c0c" (UID: "c666caa5-34af-4507-9521-883be4891c0c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.948466 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-util\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.948532 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c6m7\" (UniqueName: \"kubernetes.io/projected/c666caa5-34af-4507-9521-883be4891c0c-kube-api-access-6c6m7\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:18 crc kubenswrapper[4858]: I1122 09:34:18.948548 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c666caa5-34af-4507-9521-883be4891c0c-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:19 crc kubenswrapper[4858]: I1122 09:34:19.464595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" event={"ID":"c666caa5-34af-4507-9521-883be4891c0c","Type":"ContainerDied","Data":"50a06a313e546ab1dddd27239226da997cfb58d8a359d10e2740e531f8ccb1dc"} Nov 22 09:34:19 crc kubenswrapper[4858]: I1122 09:34:19.464662 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a06a313e546ab1dddd27239226da997cfb58d8a359d10e2740e531f8ccb1dc" Nov 22 09:34:19 crc kubenswrapper[4858]: I1122 09:34:19.464682 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dkxh4" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.014894 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz"] Nov 22 09:34:28 crc kubenswrapper[4858]: E1122 09:34:28.015743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="extract" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.015755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="extract" Nov 22 09:34:28 crc kubenswrapper[4858]: E1122 09:34:28.015777 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="util" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.015782 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="util" Nov 22 09:34:28 crc kubenswrapper[4858]: E1122 09:34:28.015810 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="pull" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.015815 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="pull" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.015988 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c666caa5-34af-4507-9521-883be4891c0c" containerName="extract" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.016662 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.018438 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.018841 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nct2m" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.018874 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.033425 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.131813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdrb4\" (UniqueName: \"kubernetes.io/projected/be0e484f-7b42-4784-b097-e6624401da64-kube-api-access-jdrb4\") pod \"obo-prometheus-operator-668cf9dfbb-6t4lz\" (UID: \"be0e484f-7b42-4784-b097-e6624401da64\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.140019 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.155210 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.155410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.156960 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.156974 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.164932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2f6kf" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.166395 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.172005 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.233585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.233699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.233767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.233800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.233824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdrb4\" (UniqueName: \"kubernetes.io/projected/be0e484f-7b42-4784-b097-e6624401da64-kube-api-access-jdrb4\") pod \"obo-prometheus-operator-668cf9dfbb-6t4lz\" (UID: \"be0e484f-7b42-4784-b097-e6624401da64\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.260383 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7zvpb"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.261588 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.264497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-sn9tq" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.279038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7zvpb"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.280844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdrb4\" (UniqueName: \"kubernetes.io/projected/be0e484f-7b42-4784-b097-e6624401da64-kube-api-access-jdrb4\") pod \"obo-prometheus-operator-668cf9dfbb-6t4lz\" (UID: \"be0e484f-7b42-4784-b097-e6624401da64\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.284717 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85j2\" (UniqueName: \"kubernetes.io/projected/819ad141-06e5-476c-982e-b2b058403c07-kube-api-access-r85j2\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/819ad141-06e5-476c-982e-b2b058403c07-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.335613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.344824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.344853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.344860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ac6cf2f-f9d9-4eee-81be-8e29acd8e286-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz\" (UID: \"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.344930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb43d072-ca96-4312-9ee2-815f70dc32f6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv\" (UID: \"eb43d072-ca96-4312-9ee2-815f70dc32f6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.348798 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.437021 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85j2\" (UniqueName: \"kubernetes.io/projected/819ad141-06e5-476c-982e-b2b058403c07-kube-api-access-r85j2\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.437474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/819ad141-06e5-476c-982e-b2b058403c07-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.444584 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/819ad141-06e5-476c-982e-b2b058403c07-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.461573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85j2\" (UniqueName: \"kubernetes.io/projected/819ad141-06e5-476c-982e-b2b058403c07-kube-api-access-r85j2\") pod \"observability-operator-d8bb48f5d-7zvpb\" (UID: \"819ad141-06e5-476c-982e-b2b058403c07\") " pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.485685 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-l4lnr"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.487775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.487925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.491038 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-g48n9" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.512767 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.518830 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-l4lnr"] Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.538484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f368c1-607c-4d75-93c4-6f66035f169c-openshift-service-ca\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.538544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwvj\" (UniqueName: \"kubernetes.io/projected/f6f368c1-607c-4d75-93c4-6f66035f169c-kube-api-access-7nwvj\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.631575 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.642442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f368c1-607c-4d75-93c4-6f66035f169c-openshift-service-ca\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.642522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nwvj\" (UniqueName: \"kubernetes.io/projected/f6f368c1-607c-4d75-93c4-6f66035f169c-kube-api-access-7nwvj\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.643802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6f368c1-607c-4d75-93c4-6f66035f169c-openshift-service-ca\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: E1122 09:34:28.649879 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.661129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nwvj\" (UniqueName: \"kubernetes.io/projected/f6f368c1-607c-4d75-93c4-6f66035f169c-kube-api-access-7nwvj\") pod \"perses-operator-5446b9c989-l4lnr\" (UID: \"f6f368c1-607c-4d75-93c4-6f66035f169c\") " 
pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:28 crc kubenswrapper[4858]: I1122 09:34:28.830766 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.397738 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz"] Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.407614 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv"] Nov 22 09:34:29 crc kubenswrapper[4858]: W1122 09:34:29.419560 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0e484f_7b42_4784_b097_e6624401da64.slice/crio-06a65b3b778df9f559d2e11f5ca7fde21bb1eee51a5197f5c428bda235d3a729 WatchSource:0}: Error finding container 06a65b3b778df9f559d2e11f5ca7fde21bb1eee51a5197f5c428bda235d3a729: Status 404 returned error can't find the container with id 06a65b3b778df9f559d2e11f5ca7fde21bb1eee51a5197f5c428bda235d3a729 Nov 22 09:34:29 crc kubenswrapper[4858]: W1122 09:34:29.422137 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac6cf2f_f9d9_4eee_81be_8e29acd8e286.slice/crio-2ddec92f2cc7fec0b4e73ca5cadb708dbdf382d5daae5a54bf84452f362cd826 WatchSource:0}: Error finding container 2ddec92f2cc7fec0b4e73ca5cadb708dbdf382d5daae5a54bf84452f362cd826: Status 404 returned error can't find the container with id 2ddec92f2cc7fec0b4e73ca5cadb708dbdf382d5daae5a54bf84452f362cd826 Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.426841 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz"] Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.563616 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-7zvpb"] Nov 22 09:34:29 crc kubenswrapper[4858]: W1122 09:34:29.566585 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod819ad141_06e5_476c_982e_b2b058403c07.slice/crio-ad96e9535418031bb2581d023c66d774ee7091352adae6c41beb8e51477a73a2 WatchSource:0}: Error finding container ad96e9535418031bb2581d023c66d774ee7091352adae6c41beb8e51477a73a2: Status 404 returned error can't find the container with id ad96e9535418031bb2581d023c66d774ee7091352adae6c41beb8e51477a73a2 Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.569108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" event={"ID":"be0e484f-7b42-4784-b097-e6624401da64","Type":"ContainerStarted","Data":"06a65b3b778df9f559d2e11f5ca7fde21bb1eee51a5197f5c428bda235d3a729"} Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.574357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" event={"ID":"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286","Type":"ContainerStarted","Data":"2ddec92f2cc7fec0b4e73ca5cadb708dbdf382d5daae5a54bf84452f362cd826"} Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.575585 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" 
event={"ID":"eb43d072-ca96-4312-9ee2-815f70dc32f6","Type":"ContainerStarted","Data":"0f31768a285920a6328deb12d76c73c7caf250953d223b078c69cf81d2784329"} Nov 22 09:34:29 crc kubenswrapper[4858]: I1122 09:34:29.691353 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-l4lnr"] Nov 22 09:34:30 crc kubenswrapper[4858]: I1122 09:34:30.588912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" event={"ID":"f6f368c1-607c-4d75-93c4-6f66035f169c","Type":"ContainerStarted","Data":"db323bca58101ae46d03168c3fd95de35a35143b6435dace0897ecef416c325d"} Nov 22 09:34:30 crc kubenswrapper[4858]: I1122 09:34:30.590189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" event={"ID":"819ad141-06e5-476c-982e-b2b058403c07","Type":"ContainerStarted","Data":"ad96e9535418031bb2581d023c66d774ee7091352adae6c41beb8e51477a73a2"} Nov 22 09:34:38 crc kubenswrapper[4858]: I1122 09:34:38.278123 4858 scope.go:117] "RemoveContainer" containerID="60d399c736bb2977373cf9f4b26babac25980176be9375e8175f9d98f4168467" Nov 22 09:34:38 crc kubenswrapper[4858]: E1122 09:34:38.926675 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19af9ca_499c_4038_be7c_820ec5c605b4.slice/crio-3f8cc1efa47abc9088ac604ff47989f4912843b13decf24187e1fa473c9c79ac\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:39 crc kubenswrapper[4858]: I1122 09:34:39.578606 4858 scope.go:117] "RemoveContainer" containerID="a76bbca941c73db5ca0e85403eaaf4dff4c2af41c5efac0bd7e7594f50fed4e9" Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.753996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" event={"ID":"eb43d072-ca96-4312-9ee2-815f70dc32f6","Type":"ContainerStarted","Data":"9ddb76445eba87ab39e3ddf8f6dcf2cf8d0bfb63478da4801b3d1fd9f67ca9b3"} Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.757749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" event={"ID":"be0e484f-7b42-4784-b097-e6624401da64","Type":"ContainerStarted","Data":"3072de53831bcaa9a4f82ccbee6670f555ae30b860a5ece24484e1735c25bae0"} Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.759445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" event={"ID":"f6f368c1-607c-4d75-93c4-6f66035f169c","Type":"ContainerStarted","Data":"b9639a154a55b9fbdfc408c7258a263bd2bec0e77bee15065e4621602446c732"} Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.759570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.761281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" event={"ID":"2ac6cf2f-f9d9-4eee-81be-8e29acd8e286","Type":"ContainerStarted","Data":"49c95a684c0aa65b0fe28100f9014b7cf0b2245ba7cede62eb8aa0ffea875731"} Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 
09:34:40.778054 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-sljnv" podStartSLOduration=2.8396736970000003 podStartE2EDuration="12.778034691s" podCreationTimestamp="2025-11-22 09:34:28 +0000 UTC" firstStartedPulling="2025-11-22 09:34:29.420236952 +0000 UTC m=+8631.261659968" lastFinishedPulling="2025-11-22 09:34:39.358597936 +0000 UTC m=+8641.200020962" observedRunningTime="2025-11-22 09:34:40.776034228 +0000 UTC m=+8642.617457254" watchObservedRunningTime="2025-11-22 09:34:40.778034691 +0000 UTC m=+8642.619457697" Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.801903 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" podStartSLOduration=2.9336491049999998 podStartE2EDuration="12.801883555s" podCreationTimestamp="2025-11-22 09:34:28 +0000 UTC" firstStartedPulling="2025-11-22 09:34:29.711160312 +0000 UTC m=+8631.552583318" lastFinishedPulling="2025-11-22 09:34:39.579394762 +0000 UTC m=+8641.420817768" observedRunningTime="2025-11-22 09:34:40.799326463 +0000 UTC m=+8642.640749469" watchObservedRunningTime="2025-11-22 09:34:40.801883555 +0000 UTC m=+8642.643306561" Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.838170 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59b65dd5f7-4sjcz" podStartSLOduration=2.686535928 podStartE2EDuration="12.838147606s" podCreationTimestamp="2025-11-22 09:34:28 +0000 UTC" firstStartedPulling="2025-11-22 09:34:29.427054931 +0000 UTC m=+8631.268477937" lastFinishedPulling="2025-11-22 09:34:39.578666599 +0000 UTC m=+8641.420089615" observedRunningTime="2025-11-22 09:34:40.822943459 +0000 UTC m=+8642.664366465" watchObservedRunningTime="2025-11-22 09:34:40.838147606 +0000 UTC m=+8642.679570612" Nov 22 09:34:40 crc kubenswrapper[4858]: I1122 09:34:40.863419 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-6t4lz" podStartSLOduration=3.707808359 podStartE2EDuration="13.863396564s" podCreationTimestamp="2025-11-22 09:34:27 +0000 UTC" firstStartedPulling="2025-11-22 09:34:29.423630311 +0000 UTC m=+8631.265053327" lastFinishedPulling="2025-11-22 09:34:39.579218526 +0000 UTC m=+8641.420641532" observedRunningTime="2025-11-22 09:34:40.855313735 +0000 UTC m=+8642.696736741" watchObservedRunningTime="2025-11-22 09:34:40.863396564 +0000 UTC m=+8642.704819570" Nov 22 09:34:42 crc kubenswrapper[4858]: I1122 09:34:42.829835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" event={"ID":"819ad141-06e5-476c-982e-b2b058403c07","Type":"ContainerStarted","Data":"eedc90c381a5f0b51f6b34745ff073e4deaf7d064bf5cfec994930b8c1b010ab"} Nov 22 09:34:42 crc kubenswrapper[4858]: I1122 09:34:42.830820 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:42 crc kubenswrapper[4858]: I1122 09:34:42.833693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" Nov 22 09:34:42 crc kubenswrapper[4858]: I1122 09:34:42.894534 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-7zvpb" 
podStartSLOduration=2.698123088 podStartE2EDuration="14.894517144s" podCreationTimestamp="2025-11-22 09:34:28 +0000 UTC" firstStartedPulling="2025-11-22 09:34:29.568428325 +0000 UTC m=+8631.409851331" lastFinishedPulling="2025-11-22 09:34:41.764822371 +0000 UTC m=+8643.606245387" observedRunningTime="2025-11-22 09:34:42.861387494 +0000 UTC m=+8644.702810520" watchObservedRunningTime="2025-11-22 09:34:42.894517144 +0000 UTC m=+8644.735940150" Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.040881 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2kc8t"] Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.048569 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2kc8t"] Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.312748 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.313050 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.313092 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.313912 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.313970 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" gracePeriod=600 Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.551149 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a095c1e-c781-4d40-bae8-0012c2c014c3" path="/var/lib/kubelet/pods/2a095c1e-c781-4d40-bae8-0012c2c014c3/volumes" Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.862114 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" exitCode=0 Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.862151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82"} Nov 22 09:34:45 crc kubenswrapper[4858]: I1122 09:34:45.862182 4858 scope.go:117] "RemoveContainer" 
containerID="01b154540af086555e8de88df2c8cf3032eaed4484d3077288bd94301afb3099" Nov 22 09:34:46 crc kubenswrapper[4858]: E1122 09:34:46.150117 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:34:46 crc kubenswrapper[4858]: I1122 09:34:46.876625 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:34:46 crc kubenswrapper[4858]: E1122 09:34:46.877543 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:34:48 crc kubenswrapper[4858]: I1122 09:34:48.833611 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-l4lnr" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.581682 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.582134 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="219f1525-1b78-413c-a590-76f21b7df852" containerName="openstackclient" containerID="cri-o://d549362dfc90b0a50c5ba9a47f8c3e2a35a35e0f363ecfdba956cba81768d510" gracePeriod=2 Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.597342 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.651800 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: E1122 09:34:51.652180 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219f1525-1b78-413c-a590-76f21b7df852" containerName="openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.652197 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="219f1525-1b78-413c-a590-76f21b7df852" containerName="openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.652401 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="219f1525-1b78-413c-a590-76f21b7df852" containerName="openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.653056 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.683619 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.705555 4858 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b16cf4c-dd2c-4eae-845b-70306f104b7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T09:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T09:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T09:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T09:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:87d86758a49b8425a546c66207f21761\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5tsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T09:34:51Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.714935 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: E1122 09:34:51.715739 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-m5tsx openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-m5tsx openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 
09:34:51.727627 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.760370 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.762697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.773008 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="219f1525-1b78-413c-a590-76f21b7df852" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.777599 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.787212 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.874306 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.875790 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.880417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-skg6b" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.885303 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.921085 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.928406 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.928824 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.934437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.934601 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clmz\" (UniqueName: \"kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.934632 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.934712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:51 crc kubenswrapper[4858]: I1122 09:34:51.949680 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.044460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2clmz\" (UniqueName: \"kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.044502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.044566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.044597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.044618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sm5q9\" (UniqueName: \"kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9\") pod \"kube-state-metrics-0\" (UID: \"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96\") " pod="openstack/kube-state-metrics-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.050130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.054182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.055815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.092266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2clmz\" (UniqueName: \"kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz\") pod \"openstackclient\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.150525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5q9\" (UniqueName: \"kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9\") pod \"kube-state-metrics-0\" (UID: \"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96\") " pod="openstack/kube-state-metrics-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.225602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5q9\" (UniqueName: \"kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9\") pod \"kube-state-metrics-0\" (UID: \"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96\") " pod="openstack/kube-state-metrics-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.387855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.497648 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.670692 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.672890 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.687748 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.688045 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.688158 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.688256 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-pwh65" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.688396 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.702785 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxb6p\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.861634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.930504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.935060 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.958850 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962727 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962951 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxb6p\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p\") pod \"alertmanager-metric-storage-0\" (UID: 
\"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.962978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.969869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.971249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.976460 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.976941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.982934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:52 crc kubenswrapper[4858]: I1122 09:34:52.983843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.014692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxb6p\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p\") pod \"alertmanager-metric-storage-0\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.183332 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.194771 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 
09:34:53.211754 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.213879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.225881 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.225971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.226028 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.226144 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lzg6c" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.226311 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.226620 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvc9\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268734 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.268775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.293378 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.309120 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371655 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvc9\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.371899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.373164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.380986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.384308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.392059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.392793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.393003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.424422 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.424465 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/97aee99cbbd289a350f82ba4e6e4e55bbb7cafb4bf9b5d608b07df4d4b84cbd0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.429202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvc9\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.599176 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b16cf4c-dd2c-4eae-845b-70306f104b7e" path="/var/lib/kubelet/pods/5b16cf4c-dd2c-4eae-845b-70306f104b7e/volumes" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.839093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.963143 4858 generic.go:334] "Generic (PLEG): container finished" podID="219f1525-1b78-413c-a590-76f21b7df852" containerID="d549362dfc90b0a50c5ba9a47f8c3e2a35a35e0f363ecfdba956cba81768d510" exitCode=137 Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.966073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"43ac7f7d-78b2-4260-93a7-0dd90c837b9c","Type":"ContainerStarted","Data":"9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af"} Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.966110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"43ac7f7d-78b2-4260-93a7-0dd90c837b9c","Type":"ContainerStarted","Data":"23497d88238db05332235b38584d929024f38157801d0f2b1fa9e2cdb2b18146"} Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.973157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96","Type":"ContainerStarted","Data":"911ef0a9211bfa9ce3e32180986f79064225063dcaaa36aaa65cf7940154643f"} Nov 22 09:34:53 crc kubenswrapper[4858]: I1122 09:34:53.983143 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.012139 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.012121867 podStartE2EDuration="3.012121867s" podCreationTimestamp="2025-11-22 09:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:34:54.002186049 +0000 UTC m=+8655.843609065" watchObservedRunningTime="2025-11-22 09:34:54.012121867 +0000 UTC m=+8655.853544873" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.124039 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:34:54 crc kubenswrapper[4858]: W1122 09:34:54.148680 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0691c992_818e_46a2_9057_2f9548253076.slice/crio-f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a WatchSource:0}: Error finding container f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a: Status 404 returned error can't find the container with id f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.232394 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.406804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78f7\" (UniqueName: \"kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7\") pod \"219f1525-1b78-413c-a590-76f21b7df852\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.407166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret\") pod \"219f1525-1b78-413c-a590-76f21b7df852\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.407341 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config\") pod \"219f1525-1b78-413c-a590-76f21b7df852\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.407430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle\") pod \"219f1525-1b78-413c-a590-76f21b7df852\" (UID: \"219f1525-1b78-413c-a590-76f21b7df852\") " Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.413435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7" (OuterVolumeSpecName: "kube-api-access-b78f7") pod "219f1525-1b78-413c-a590-76f21b7df852" (UID: "219f1525-1b78-413c-a590-76f21b7df852"). InnerVolumeSpecName "kube-api-access-b78f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.439840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "219f1525-1b78-413c-a590-76f21b7df852" (UID: "219f1525-1b78-413c-a590-76f21b7df852"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.448878 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "219f1525-1b78-413c-a590-76f21b7df852" (UID: "219f1525-1b78-413c-a590-76f21b7df852"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.480792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "219f1525-1b78-413c-a590-76f21b7df852" (UID: "219f1525-1b78-413c-a590-76f21b7df852"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.509437 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b78f7\" (UniqueName: \"kubernetes.io/projected/219f1525-1b78-413c-a590-76f21b7df852-kube-api-access-b78f7\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.509472 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.509482 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/219f1525-1b78-413c-a590-76f21b7df852-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.509490 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219f1525-1b78-413c-a590-76f21b7df852-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.558468 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:34:54 crc kubenswrapper[4858]: W1122 09:34:54.573574 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf8488e_d69b_45d0_a791_299f2aa65aa4.slice/crio-9431dbee56a3e5d96a0c00636b23d92ca0b6e245690001775d0d7f98cdee6c99 WatchSource:0}: Error finding container 9431dbee56a3e5d96a0c00636b23d92ca0b6e245690001775d0d7f98cdee6c99: Status 404 returned error can't find the container with id 9431dbee56a3e5d96a0c00636b23d92ca0b6e245690001775d0d7f98cdee6c99 Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.985605 4858 scope.go:117] "RemoveContainer" containerID="d549362dfc90b0a50c5ba9a47f8c3e2a35a35e0f363ecfdba956cba81768d510" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.985669 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.987951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerStarted","Data":"f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a"} Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.990024 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96","Type":"ContainerStarted","Data":"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97"} Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.991150 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 09:34:54 crc kubenswrapper[4858]: I1122 09:34:54.998046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerStarted","Data":"9431dbee56a3e5d96a0c00636b23d92ca0b6e245690001775d0d7f98cdee6c99"} Nov 22 09:34:55 crc kubenswrapper[4858]: I1122 09:34:55.016517 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="219f1525-1b78-413c-a590-76f21b7df852" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" Nov 22 09:34:55 crc kubenswrapper[4858]: I1122 09:34:55.550472 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219f1525-1b78-413c-a590-76f21b7df852" path="/var/lib/kubelet/pods/219f1525-1b78-413c-a590-76f21b7df852/volumes" Nov 22 09:34:59 crc kubenswrapper[4858]: I1122 09:34:59.550607 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:34:59 crc kubenswrapper[4858]: E1122 09:34:59.551645 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:34:59 crc kubenswrapper[4858]: I1122 09:34:59.585462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.10901176 podStartE2EDuration="8.585437097s" podCreationTimestamp="2025-11-22 09:34:51 +0000 UTC" firstStartedPulling="2025-11-22 09:34:53.197179377 +0000 UTC m=+8655.038602383" lastFinishedPulling="2025-11-22 09:34:53.673604724 +0000 UTC m=+8655.515027720" observedRunningTime="2025-11-22 09:34:55.011941494 +0000 UTC m=+8656.853364500" watchObservedRunningTime="2025-11-22 09:34:59.585437097 +0000 UTC m=+8661.426860103" Nov 22 09:35:01 crc kubenswrapper[4858]: I1122 09:35:01.062974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerStarted","Data":"84077731cbf14b4a7a9b6c9a8f86172f3b454069f7e80249ba2ab4d94ebd58fb"} Nov 22 09:35:02 crc kubenswrapper[4858]: I1122 09:35:02.073616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerStarted","Data":"fa71adaecd0a9e035bee0fcd93bdc9536313df78216677aca88097078173e487"} Nov 22 09:35:02 crc kubenswrapper[4858]: I1122 09:35:02.504692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 09:35:09 crc kubenswrapper[4858]: I1122 09:35:09.156166 4858 generic.go:334] "Generic (PLEG): container finished" podID="0691c992-818e-46a2-9057-2f9548253076" containerID="84077731cbf14b4a7a9b6c9a8f86172f3b454069f7e80249ba2ab4d94ebd58fb" exitCode=0 Nov 22 09:35:09 crc kubenswrapper[4858]: I1122 09:35:09.156273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerDied","Data":"84077731cbf14b4a7a9b6c9a8f86172f3b454069f7e80249ba2ab4d94ebd58fb"} Nov 22 09:35:09 crc kubenswrapper[4858]: I1122 09:35:09.164478 4858 generic.go:334] "Generic (PLEG): container finished" podID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerID="fa71adaecd0a9e035bee0fcd93bdc9536313df78216677aca88097078173e487" exitCode=0 Nov 22 09:35:09 crc kubenswrapper[4858]: I1122 09:35:09.164523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerDied","Data":"fa71adaecd0a9e035bee0fcd93bdc9536313df78216677aca88097078173e487"} Nov 22 09:35:12 crc kubenswrapper[4858]: I1122 09:35:12.201894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerStarted","Data":"918209d7d13a78e17d2265b8b6e9586b5d6360719a05e32d9d26a420c7ab48d1"} Nov 22 09:35:13 crc kubenswrapper[4858]: I1122 09:35:13.536063 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:35:13 crc kubenswrapper[4858]: E1122 09:35:13.536850 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:35:16 crc kubenswrapper[4858]: I1122 09:35:16.259755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerStarted","Data":"4f38792396e3d0b3fe3482c717f089c4843b54559f52cb8be1e2ed5bed2a403e"} Nov 22 09:35:16 crc kubenswrapper[4858]: I1122 09:35:16.309249 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.758775562 podStartE2EDuration="24.309215481s" podCreationTimestamp="2025-11-22 09:34:52 +0000 UTC" firstStartedPulling="2025-11-22 09:34:54.158563604 +0000 UTC m=+8655.999986600" lastFinishedPulling="2025-11-22 09:35:11.709003463 +0000 UTC m=+8673.550426519" observedRunningTime="2025-11-22 09:35:16.284209261 +0000 UTC m=+8678.125632367" watchObservedRunningTime="2025-11-22 09:35:16.309215481 +0000 UTC m=+8678.150638557" Nov 22 09:35:17 crc kubenswrapper[4858]: I1122 09:35:17.275276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerStarted","Data":"3cfaa3a912666a5763df623a8150b5f14b4dc8b242e5b7e76d5c15ddb0212691"} Nov 22 09:35:17 crc kubenswrapper[4858]: I1122 09:35:17.275956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:35:17 crc kubenswrapper[4858]: I1122 09:35:17.280035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:35:21 crc kubenswrapper[4858]: I1122 09:35:21.326748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerStarted","Data":"3bca54fb5b3a858af7df2e9be8235352def5bba9fcb9513cd501391123bd777a"} Nov 22 09:35:24 crc kubenswrapper[4858]: I1122 09:35:24.362116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerStarted","Data":"1fef5450ce31c9a7cfac73cebe3276d73ac86c25c59370ad8243b12907b7a389"} Nov 22 09:35:24 crc kubenswrapper[4858]: I1122 09:35:24.407692 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.781594175 podStartE2EDuration="32.407673304s" podCreationTimestamp="2025-11-22 09:34:52 +0000 UTC" firstStartedPulling="2025-11-22 09:34:54.576716525 +0000 UTC m=+8656.418139531" lastFinishedPulling="2025-11-22 09:35:23.202795654 +0000 UTC m=+8685.044218660" observedRunningTime="2025-11-22 09:35:24.400092621 +0000 UTC m=+8686.241515647" watchObservedRunningTime="2025-11-22 09:35:24.407673304 +0000 UTC m=+8686.249096310" Nov 22 09:35:27 crc kubenswrapper[4858]: I1122 09:35:27.536284 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:35:27 crc kubenswrapper[4858]: E1122 09:35:27.537609 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:35:28 crc kubenswrapper[4858]: I1122 09:35:28.984222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.391960 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.395409 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.402043 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.402249 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.432823 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8j8w\" (UniqueName: \"kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.532636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634397 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8j8w\" (UniqueName: \"kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.634530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.635079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.636059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.641116 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.642231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.642702 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.643777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.653660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8j8w\" (UniqueName: \"kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w\") pod \"ceilometer-0\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " pod="openstack/ceilometer-0" Nov 22 09:35:34 crc kubenswrapper[4858]: I1122 09:35:34.731171 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:35:35 crc kubenswrapper[4858]: I1122 09:35:35.203882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:35:35 crc kubenswrapper[4858]: I1122 09:35:35.497831 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerStarted","Data":"eb3cf85dfbda8dbf92a3c274af720ab98a0f0daf83dcda1780181677f7c27306"} Nov 22 09:35:38 crc kubenswrapper[4858]: I1122 09:35:38.983565 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:38 crc kubenswrapper[4858]: I1122 09:35:38.985905 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:39 crc kubenswrapper[4858]: I1122 09:35:39.559869 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:39 crc kubenswrapper[4858]: I1122 09:35:39.825297 4858 scope.go:117] "RemoveContainer" containerID="42dda4d7c013497225a600830b444227d481bbe03d6918515af62151977ed172" Nov 22 09:35:39 crc kubenswrapper[4858]: I1122 09:35:39.870378 4858 scope.go:117] "RemoveContainer" containerID="2815c5b131e046c9fe0fd6995ed7c565c3961288f63bf880f187637ffc7eb0c1" Nov 22 09:35:39 crc kubenswrapper[4858]: I1122 09:35:39.931208 4858 scope.go:117] "RemoveContainer" containerID="a64434333a390bb571799f987682e4b16aecc0d7ccdb263e229d6a02273f9251" Nov 22 09:35:39 crc kubenswrapper[4858]: I1122 09:35:39.957985 4858 scope.go:117] "RemoveContainer" containerID="b0626edc2769d88f87bfaf0d74800221c0acf3e64782837700281b9f040ec63d" Nov 22 09:35:40 crc kubenswrapper[4858]: I1122 09:35:40.548707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerStarted","Data":"cb59e9eb819d4ee5cda01a43d315223b4602645119f72fdc8a24888a01bdfb81"} Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.017418 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.018119 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" containerName="openstackclient" 
containerID="cri-o://9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af" gracePeriod=2 Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.076762 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.138389 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 09:35:41 crc kubenswrapper[4858]: E1122 09:35:41.139453 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" containerName="openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.139486 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" containerName="openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.139948 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" containerName="openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.155701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.155893 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.178032 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.284571 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4t8s\" (UniqueName: \"kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.284793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.284839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.285030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.386899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 
09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.388123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.388307 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.388625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4t8s\" (UniqueName: \"kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.389240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.392089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.392236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.415506 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4t8s\" (UniqueName: \"kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s\") pod \"openstackclient\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.486766 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 09:35:41 crc kubenswrapper[4858]: I1122 09:35:41.570083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerStarted","Data":"a3f1788a26f64a5f11db8f702ac6c51bf557105ff0ed9904fe2452c139073ad1"} Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.109776 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.536581 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:35:42 crc kubenswrapper[4858]: E1122 09:35:42.536812 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.584007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ced90ddf-eae9-45e2-ae0a-9306ed9873d7","Type":"ContainerStarted","Data":"89f5cb55d37c396ec6fc110b271605257fc3966e0a587600d0b34d9feee774c6"} Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.584050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ced90ddf-eae9-45e2-ae0a-9306ed9873d7","Type":"ContainerStarted","Data":"9c7ac68a37778229c614a76c20d61f6a39f4b768ecc795fc0cb6679058a8da0a"} Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.588672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerStarted","Data":"f1676e8037e7dd2174352a488d5cc89f6d0b0e9e91773be6d80e7335a9f7469f"} Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.614138 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.614115207 podStartE2EDuration="1.614115207s" podCreationTimestamp="2025-11-22 09:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:35:42.606160892 +0000 UTC m=+8704.447583908" watchObservedRunningTime="2025-11-22 09:35:42.614115207 +0000 UTC m=+8704.455538213" Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.790530 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.790846 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="prometheus" containerID="cri-o://3cfaa3a912666a5763df623a8150b5f14b4dc8b242e5b7e76d5c15ddb0212691" gracePeriod=600 Nov 22 09:35:42 crc kubenswrapper[4858]: I1122 09:35:42.790952 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="thanos-sidecar" containerID="cri-o://1fef5450ce31c9a7cfac73cebe3276d73ac86c25c59370ad8243b12907b7a389" gracePeriod=600 Nov 22 09:35:42 crc 
kubenswrapper[4858]: I1122 09:35:42.790981 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="config-reloader" containerID="cri-o://3bca54fb5b3a858af7df2e9be8235352def5bba9fcb9513cd501391123bd777a" gracePeriod=600 Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.337278 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.449265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret\") pod \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.449640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle\") pod \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.449724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2clmz\" (UniqueName: \"kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz\") pod \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.449803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config\") pod \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\" (UID: \"43ac7f7d-78b2-4260-93a7-0dd90c837b9c\") " Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.453939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz" (OuterVolumeSpecName: "kube-api-access-2clmz") pod "43ac7f7d-78b2-4260-93a7-0dd90c837b9c" (UID: "43ac7f7d-78b2-4260-93a7-0dd90c837b9c"). InnerVolumeSpecName "kube-api-access-2clmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.476832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "43ac7f7d-78b2-4260-93a7-0dd90c837b9c" (UID: "43ac7f7d-78b2-4260-93a7-0dd90c837b9c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.491973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43ac7f7d-78b2-4260-93a7-0dd90c837b9c" (UID: "43ac7f7d-78b2-4260-93a7-0dd90c837b9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.514664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "43ac7f7d-78b2-4260-93a7-0dd90c837b9c" (UID: "43ac7f7d-78b2-4260-93a7-0dd90c837b9c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.549252 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" path="/var/lib/kubelet/pods/43ac7f7d-78b2-4260-93a7-0dd90c837b9c/volumes" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.552478 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.552502 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2clmz\" (UniqueName: \"kubernetes.io/projected/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-kube-api-access-2clmz\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.552513 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.552523 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43ac7f7d-78b2-4260-93a7-0dd90c837b9c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600328 4858 generic.go:334] "Generic (PLEG): container finished" podID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerID="1fef5450ce31c9a7cfac73cebe3276d73ac86c25c59370ad8243b12907b7a389" exitCode=0 Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600361 4858 generic.go:334] "Generic (PLEG): container finished" podID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerID="3bca54fb5b3a858af7df2e9be8235352def5bba9fcb9513cd501391123bd777a" exitCode=0 Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600369 4858 generic.go:334] "Generic (PLEG): container finished" podID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerID="3cfaa3a912666a5763df623a8150b5f14b4dc8b242e5b7e76d5c15ddb0212691" exitCode=0 Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerDied","Data":"1fef5450ce31c9a7cfac73cebe3276d73ac86c25c59370ad8243b12907b7a389"} Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerDied","Data":"3bca54fb5b3a858af7df2e9be8235352def5bba9fcb9513cd501391123bd777a"} Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.600445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerDied","Data":"3cfaa3a912666a5763df623a8150b5f14b4dc8b242e5b7e76d5c15ddb0212691"} Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.602489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerStarted","Data":"0539743122e295683803747a5ffd6188d343eb46dbf55fd94ecbff7626b3a961"} Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.603773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.605284 4858 generic.go:334] "Generic (PLEG): container finished" podID="43ac7f7d-78b2-4260-93a7-0dd90c837b9c" containerID="9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af" exitCode=137 Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.605384 4858 scope.go:117] "RemoveContainer" containerID="9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.605459 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.634588 4858 scope.go:117] "RemoveContainer" containerID="9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af" Nov 22 09:35:43 crc kubenswrapper[4858]: E1122 09:35:43.638734 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af\": container with ID starting with 9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af not found: ID does not exist" containerID="9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.638785 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af"} err="failed to get container status \"9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af\": rpc error: code = NotFound desc = could not find container \"9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af\": container with ID starting with 9c6b38d26676711bb20181cb7d140cb8e875d836eb6659902144ef946099e2af not found: ID does not exist" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.640171 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.759836603 podStartE2EDuration="9.640156163s" podCreationTimestamp="2025-11-22 09:35:34 +0000 UTC" firstStartedPulling="2025-11-22 09:35:35.22166718 +0000 UTC m=+8697.063090186" lastFinishedPulling="2025-11-22 09:35:43.10198674 +0000 UTC m=+8704.943409746" observedRunningTime="2025-11-22 09:35:43.627916921 +0000 UTC m=+8705.469339947" watchObservedRunningTime="2025-11-22 09:35:43.640156163 +0000 UTC m=+8705.481579189" Nov 22 09:35:43 crc kubenswrapper[4858]: I1122 09:35:43.984064 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.145:9090/-/ready\": dial tcp 10.217.1.145:9090: connect: connection refused" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.430982 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.577003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.577426 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.577529 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.577569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.577686 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.578111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.578184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvc9\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.578231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out\") pod \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\" (UID: \"dbf8488e-d69b-45d0-a791-299f2aa65aa4\") " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.578595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.578985 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dbf8488e-d69b-45d0-a791-299f2aa65aa4-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.585378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.585785 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.585970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out" (OuterVolumeSpecName: "config-out") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.596604 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9" (OuterVolumeSpecName: "kube-api-access-vpvc9") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "kube-api-access-vpvc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.627700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dbf8488e-d69b-45d0-a791-299f2aa65aa4","Type":"ContainerDied","Data":"9431dbee56a3e5d96a0c00636b23d92ca0b6e245690001775d0d7f98cdee6c99"} Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.627754 4858 scope.go:117] "RemoveContainer" containerID="1fef5450ce31c9a7cfac73cebe3276d73ac86c25c59370ad8243b12907b7a389" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.627870 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.635579 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config" (OuterVolumeSpecName: "config") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.639555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.642162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config" (OuterVolumeSpecName: "web-config") pod "dbf8488e-d69b-45d0-a791-299f2aa65aa4" (UID: "dbf8488e-d69b-45d0-a791-299f2aa65aa4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680844 4858 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680881 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680895 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680905 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dbf8488e-d69b-45d0-a791-299f2aa65aa4-web-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680932 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") on node \"crc\" " Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680945 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvc9\" (UniqueName: \"kubernetes.io/projected/dbf8488e-d69b-45d0-a791-299f2aa65aa4-kube-api-access-vpvc9\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.680955 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dbf8488e-d69b-45d0-a791-299f2aa65aa4-config-out\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.717078 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.717639 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b") on node "crc" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.718204 4858 scope.go:117] "RemoveContainer" containerID="3bca54fb5b3a858af7df2e9be8235352def5bba9fcb9513cd501391123bd777a" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.737372 4858 scope.go:117] "RemoveContainer" containerID="3cfaa3a912666a5763df623a8150b5f14b4dc8b242e5b7e76d5c15ddb0212691" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.758837 4858 scope.go:117] "RemoveContainer" containerID="fa71adaecd0a9e035bee0fcd93bdc9536313df78216677aca88097078173e487" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.783886 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:44 crc kubenswrapper[4858]: I1122 09:35:44.989758 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.014491 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031078 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:45 crc kubenswrapper[4858]: E1122 09:35:45.031601 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="config-reloader" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031625 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="config-reloader" Nov 22 09:35:45 crc kubenswrapper[4858]: E1122 09:35:45.031663 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="init-config-reloader" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="init-config-reloader" Nov 22 09:35:45 crc kubenswrapper[4858]: E1122 09:35:45.031686 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="prometheus" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031694 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="prometheus" Nov 22 09:35:45 crc kubenswrapper[4858]: E1122 09:35:45.031711 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="thanos-sidecar" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031718 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="thanos-sidecar" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031950 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="config-reloader" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031977 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="prometheus" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.031993 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" containerName="thanos-sidecar" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.034247 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.036704 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.042207 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.042965 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.043149 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.043437 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-lzg6c" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.043748 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.046595 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.046751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.194374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.194448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.194592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.194642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196278 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzjqf\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196695 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.196832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298653 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298847 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzjqf\" (UniqueName: 
\"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.298907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.299609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.302748 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.302797 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/97aee99cbbd289a350f82ba4e6e4e55bbb7cafb4bf9b5d608b07df4d4b84cbd0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.305355 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.305792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.305970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.308541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.308738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.310391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.310490 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.312352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.319735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzjqf\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.368876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"prometheus-metric-storage-0\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.553618 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf8488e-d69b-45d0-a791-299f2aa65aa4" path="/var/lib/kubelet/pods/dbf8488e-d69b-45d0-a791-299f2aa65aa4/volumes" Nov 22 09:35:45 crc kubenswrapper[4858]: I1122 09:35:45.657000 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:35:46 crc kubenswrapper[4858]: W1122 09:35:46.165619 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20945ad_d582_4bb8_a485_c6dbb78207fe.slice/crio-66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7 WatchSource:0}: Error finding container 66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7: Status 404 returned error can't find the container with id 66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7 Nov 22 09:35:46 crc kubenswrapper[4858]: I1122 09:35:46.180137 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:35:46 crc kubenswrapper[4858]: I1122 09:35:46.661267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerStarted","Data":"66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7"} Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.122709 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-tfssl"] Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.125017 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.135404 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2e22-account-create-qkb8w"] Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.137098 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.139467 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.142507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-tfssl"] Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.148050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2e22-account-create-qkb8w"] Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.187805 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htptz\" (UniqueName: \"kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.187918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.290142 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbh9\" (UniqueName: \"kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.290219 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.290283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.290445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htptz\" (UniqueName: \"kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.291649 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.312181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htptz\" (UniqueName: \"kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz\") pod \"aodh-db-create-tfssl\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.393519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psbh9\" (UniqueName: \"kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.393584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.394605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.410146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psbh9\" (UniqueName: \"kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9\") pod \"aodh-2e22-account-create-qkb8w\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.450795 4858 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:49 crc kubenswrapper[4858]: I1122 09:35:49.467630 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.117022 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2e22-account-create-qkb8w"] Nov 22 09:35:50 crc kubenswrapper[4858]: W1122 09:35:50.117231 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214a7d5c_aa83_4ebc_b7dc_942ecdfdb759.slice/crio-e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2 WatchSource:0}: Error finding container e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2: Status 404 returned error can't find the container with id e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2 Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.193685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-tfssl"] Nov 22 09:35:50 crc kubenswrapper[4858]: W1122 09:35:50.207032 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc146df95_ce54_4862_86de_1f1612502264.slice/crio-010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0 WatchSource:0}: Error finding container 010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0: Status 404 returned error can't find the container with id 010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0 Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.719404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-tfssl" event={"ID":"c146df95-ce54-4862-86de-1f1612502264","Type":"ContainerStarted","Data":"03f0619fca527648354a137f8ed45941fe3b9e6ed1682842bd4cb2b6eb5ae9f6"} Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.719459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-tfssl" event={"ID":"c146df95-ce54-4862-86de-1f1612502264","Type":"ContainerStarted","Data":"010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0"} Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.722331 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerStarted","Data":"6710e42193427ea5e698492be80b243408ec95b75ef41570bcefa42cabb6bd45"} Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.724125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2e22-account-create-qkb8w" event={"ID":"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759","Type":"ContainerStarted","Data":"521ad7f882964279edf7d5590bacbc8eeace3ac39a83cda6a10df740dc827350"} Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.724159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2e22-account-create-qkb8w" event={"ID":"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759","Type":"ContainerStarted","Data":"e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2"} Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.765198 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-tfssl" podStartSLOduration=1.765176633 podStartE2EDuration="1.765176633s" podCreationTimestamp="2025-11-22 09:35:49 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:35:50.735583985 +0000 UTC m=+8712.577007001" watchObservedRunningTime="2025-11-22 09:35:50.765176633 +0000 UTC m=+8712.606599639" Nov 22 09:35:50 crc kubenswrapper[4858]: I1122 09:35:50.787973 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-2e22-account-create-qkb8w" podStartSLOduration=1.787949882 podStartE2EDuration="1.787949882s" podCreationTimestamp="2025-11-22 09:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:35:50.779795941 +0000 UTC m=+8712.621218957" watchObservedRunningTime="2025-11-22 09:35:50.787949882 +0000 UTC m=+8712.629372888" Nov 22 09:35:51 crc kubenswrapper[4858]: I1122 09:35:51.734529 4858 generic.go:334] "Generic (PLEG): container finished" podID="c146df95-ce54-4862-86de-1f1612502264" containerID="03f0619fca527648354a137f8ed45941fe3b9e6ed1682842bd4cb2b6eb5ae9f6" exitCode=0 Nov 22 09:35:51 crc kubenswrapper[4858]: I1122 09:35:51.734622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-tfssl" event={"ID":"c146df95-ce54-4862-86de-1f1612502264","Type":"ContainerDied","Data":"03f0619fca527648354a137f8ed45941fe3b9e6ed1682842bd4cb2b6eb5ae9f6"} Nov 22 09:35:51 crc kubenswrapper[4858]: I1122 09:35:51.738116 4858 generic.go:334] "Generic (PLEG): container finished" podID="214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" containerID="521ad7f882964279edf7d5590bacbc8eeace3ac39a83cda6a10df740dc827350" exitCode=0 Nov 22 09:35:51 crc kubenswrapper[4858]: I1122 09:35:51.738204 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2e22-account-create-qkb8w" event={"ID":"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759","Type":"ContainerDied","Data":"521ad7f882964279edf7d5590bacbc8eeace3ac39a83cda6a10df740dc827350"} Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.340607 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.349712 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.378223 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htptz\" (UniqueName: \"kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz\") pod \"c146df95-ce54-4862-86de-1f1612502264\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.378508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts\") pod \"c146df95-ce54-4862-86de-1f1612502264\" (UID: \"c146df95-ce54-4862-86de-1f1612502264\") " Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.380160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c146df95-ce54-4862-86de-1f1612502264" (UID: "c146df95-ce54-4862-86de-1f1612502264"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.396710 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz" (OuterVolumeSpecName: "kube-api-access-htptz") pod "c146df95-ce54-4862-86de-1f1612502264" (UID: "c146df95-ce54-4862-86de-1f1612502264"). InnerVolumeSpecName "kube-api-access-htptz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.480558 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts\") pod \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.480649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psbh9\" (UniqueName: \"kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9\") pod \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\" (UID: \"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759\") " Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.481401 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" (UID: "214a7d5c-aa83-4ebc-b7dc-942ecdfdb759"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.481745 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.481773 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c146df95-ce54-4862-86de-1f1612502264-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.481788 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htptz\" (UniqueName: \"kubernetes.io/projected/c146df95-ce54-4862-86de-1f1612502264-kube-api-access-htptz\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.483637 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9" (OuterVolumeSpecName: "kube-api-access-psbh9") pod "214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" (UID: "214a7d5c-aa83-4ebc-b7dc-942ecdfdb759"). InnerVolumeSpecName "kube-api-access-psbh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.583911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psbh9\" (UniqueName: \"kubernetes.io/projected/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759-kube-api-access-psbh9\") on node \"crc\" DevicePath \"\"" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.762029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2e22-account-create-qkb8w" event={"ID":"214a7d5c-aa83-4ebc-b7dc-942ecdfdb759","Type":"ContainerDied","Data":"e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2"} Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.762065 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2caba1ed96319596d82ce1c23b10672e27344cb377903e5779c5f90624bbbb2" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.762042 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2e22-account-create-qkb8w" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.763747 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-tfssl" Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.763747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-tfssl" event={"ID":"c146df95-ce54-4862-86de-1f1612502264","Type":"ContainerDied","Data":"010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0"} Nov 22 09:35:53 crc kubenswrapper[4858]: I1122 09:35:53.764028 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="010e5479968f91a77a65cb0d97d9d449967190f6799cc8e2b217f5720487c7e0" Nov 22 09:35:54 crc kubenswrapper[4858]: I1122 09:35:54.536519 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:35:54 crc kubenswrapper[4858]: E1122 09:35:54.537046 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.068914 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-034d-account-create-97dqn"] Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.082603 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xg2mb"] Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.095843 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-034d-account-create-97dqn"] Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.106236 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xg2mb"] Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.825006 4858 generic.go:334] "Generic (PLEG): container finished" podID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerID="6710e42193427ea5e698492be80b243408ec95b75ef41570bcefa42cabb6bd45" exitCode=0 Nov 22 09:35:58 crc kubenswrapper[4858]: I1122 09:35:58.825083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerDied","Data":"6710e42193427ea5e698492be80b243408ec95b75ef41570bcefa42cabb6bd45"} Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.452594 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zgsvc"] Nov 22 09:35:59 crc kubenswrapper[4858]: E1122 09:35:59.453705 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c146df95-ce54-4862-86de-1f1612502264" containerName="mariadb-database-create" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.453725 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c146df95-ce54-4862-86de-1f1612502264" containerName="mariadb-database-create" Nov 22 09:35:59 crc kubenswrapper[4858]: E1122 09:35:59.453746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" containerName="mariadb-account-create" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.453757 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" containerName="mariadb-account-create" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.454022 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" containerName="mariadb-account-create" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.454047 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c146df95-ce54-4862-86de-1f1612502264" containerName="mariadb-database-create" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.454947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.458255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.458285 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.458697 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.460746 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-nk2jv" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.468251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zgsvc"] Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.516540 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4zz\" (UniqueName: \"kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.516605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.516867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.517149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.549308 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4b2abf-6340-4188-bafd-45a37bf1b49f" path="/var/lib/kubelet/pods/0d4b2abf-6340-4188-bafd-45a37bf1b49f/volumes" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.551057 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef3e476-7f8b-4182-95c6-dd9877b2416a" path="/var/lib/kubelet/pods/eef3e476-7f8b-4182-95c6-dd9877b2416a/volumes" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.619071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4zz\" (UniqueName: \"kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.619141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.619351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.619447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.623715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.624598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.634516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data\") pod \"aodh-db-sync-zgsvc\" (UID: 
\"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.641068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4zz\" (UniqueName: \"kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz\") pod \"aodh-db-sync-zgsvc\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.773545 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:35:59 crc kubenswrapper[4858]: I1122 09:35:59.848683 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerStarted","Data":"6ad6597a2759cc61aa76fd00e3a64b4ee32679b91be7e663c37976b726f4357e"} Nov 22 09:36:00 crc kubenswrapper[4858]: W1122 09:36:00.232015 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2dc0863_1fed_426e_91b9_4112507cd4a2.slice/crio-eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e WatchSource:0}: Error finding container eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e: Status 404 returned error can't find the container with id eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e Nov 22 09:36:00 crc kubenswrapper[4858]: I1122 09:36:00.233695 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zgsvc"] Nov 22 09:36:00 crc kubenswrapper[4858]: I1122 09:36:00.870540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zgsvc" event={"ID":"e2dc0863-1fed-426e-91b9-4112507cd4a2","Type":"ContainerStarted","Data":"eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e"} Nov 22 09:36:02 crc kubenswrapper[4858]: I1122 09:36:02.914574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerStarted","Data":"d2f8ef66b6a8e77f76210f4a45fe3aca5169cb0000916d8304fd25265cec38d1"} Nov 22 09:36:02 crc kubenswrapper[4858]: I1122 09:36:02.915026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerStarted","Data":"dd0d4a38c6628e6cd6833ecc0f37a9b78f79b92faf3d87c5ffac41a4d3c25c15"} Nov 22 09:36:02 crc kubenswrapper[4858]: I1122 09:36:02.954086 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.954037678 podStartE2EDuration="18.954037678s" podCreationTimestamp="2025-11-22 09:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:36:02.946834688 +0000 UTC m=+8724.788257694" watchObservedRunningTime="2025-11-22 09:36:02.954037678 +0000 UTC m=+8724.795460714" Nov 22 09:36:04 crc kubenswrapper[4858]: I1122 09:36:04.744803 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 09:36:05 crc kubenswrapper[4858]: I1122 09:36:05.657679 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 09:36:05 crc kubenswrapper[4858]: I1122 09:36:05.945603 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zgsvc" event={"ID":"e2dc0863-1fed-426e-91b9-4112507cd4a2","Type":"ContainerStarted","Data":"0eda1c6ea33c6848a009c3dc95830f5d0706331c8f0491fb87c141c53a0cbe4c"} Nov 22 09:36:05 crc kubenswrapper[4858]: I1122 09:36:05.970253 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zgsvc" podStartSLOduration=1.9118863529999999 podStartE2EDuration="6.970237675s" podCreationTimestamp="2025-11-22 09:35:59 +0000 UTC" firstStartedPulling="2025-11-22 09:36:00.234559327 +0000 UTC m=+8722.075982343" lastFinishedPulling="2025-11-22 09:36:05.292910659 +0000 UTC m=+8727.134333665" observedRunningTime="2025-11-22 09:36:05.96788005 +0000 UTC m=+8727.809303076" watchObservedRunningTime="2025-11-22 09:36:05.970237675 +0000 UTC m=+8727.811660681" Nov 22 09:36:08 crc kubenswrapper[4858]: I1122 09:36:08.535777 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:36:08 crc kubenswrapper[4858]: E1122 09:36:08.536134 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.329416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.329951 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" containerName="kube-state-metrics" containerID="cri-o://9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97" gracePeriod=30 Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.871663 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.956400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm5q9\" (UniqueName: \"kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9\") pod \"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96\" (UID: \"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96\") " Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.964856 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9" (OuterVolumeSpecName: "kube-api-access-sm5q9") pod "9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" (UID: "9d252548-f4f3-4f94-84aa-4d2bf4ecbf96"). InnerVolumeSpecName "kube-api-access-sm5q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.988659 4858 generic.go:334] "Generic (PLEG): container finished" podID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" containerID="9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97" exitCode=2 Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.988718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96","Type":"ContainerDied","Data":"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97"} Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.988748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9d252548-f4f3-4f94-84aa-4d2bf4ecbf96","Type":"ContainerDied","Data":"911ef0a9211bfa9ce3e32180986f79064225063dcaaa36aaa65cf7940154643f"} Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.988766 4858 scope.go:117] "RemoveContainer" containerID="9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97" Nov 22 09:36:09 crc kubenswrapper[4858]: I1122 09:36:09.988895 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.024494 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.025713 4858 scope.go:117] "RemoveContainer" containerID="9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97" Nov 22 09:36:10 crc kubenswrapper[4858]: E1122 09:36:10.027133 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97\": container with ID starting with 9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97 not found: ID does not exist" containerID="9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.027168 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97"} err="failed to get container status \"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97\": rpc error: code = NotFound desc = could not find container \"9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97\": container with ID starting with 9de94b8a6495d2f8a8bd18cbf1627444297a71480ba5ccdfc90915fb325a2b97 not found: ID does not exist" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.033698 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.061796 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:10 crc kubenswrapper[4858]: E1122 09:36:10.062293 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" containerName="kube-state-metrics" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.062309 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" containerName="kube-state-metrics" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.062520 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" containerName="kube-state-metrics" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.063036 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm5q9\" (UniqueName: \"kubernetes.io/projected/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96-kube-api-access-sm5q9\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.063236 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.067622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.067706 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.095447 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.166084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.166190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqrl\" (UniqueName: \"kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.166516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.166573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.269441 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.269518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.269638 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.269844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqrl\" (UniqueName: \"kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.275948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.277805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.286166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.289443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqrl\" (UniqueName: \"kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl\") pod \"kube-state-metrics-0\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.398444 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.900827 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:36:10 crc kubenswrapper[4858]: W1122 09:36:10.908980 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf713f2_824f_4d23_bb3a_1b1f7ef99020.slice/crio-373f8ea53fb8392f4d52049d9a81199da932aa3fd6137c5e8f6369312b58f59b WatchSource:0}: Error finding container 373f8ea53fb8392f4d52049d9a81199da932aa3fd6137c5e8f6369312b58f59b: Status 404 returned error can't find the container with id 373f8ea53fb8392f4d52049d9a81199da932aa3fd6137c5e8f6369312b58f59b Nov 22 09:36:10 crc kubenswrapper[4858]: I1122 09:36:10.911843 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.000091 4858 generic.go:334] "Generic (PLEG): container finished" podID="e2dc0863-1fed-426e-91b9-4112507cd4a2" containerID="0eda1c6ea33c6848a009c3dc95830f5d0706331c8f0491fb87c141c53a0cbe4c" exitCode=0 Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.000203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zgsvc" event={"ID":"e2dc0863-1fed-426e-91b9-4112507cd4a2","Type":"ContainerDied","Data":"0eda1c6ea33c6848a009c3dc95830f5d0706331c8f0491fb87c141c53a0cbe4c"} Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.001834 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cf713f2-824f-4d23-bb3a-1b1f7ef99020","Type":"ContainerStarted","Data":"373f8ea53fb8392f4d52049d9a81199da932aa3fd6137c5e8f6369312b58f59b"} Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.395619 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.395936 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-central-agent" containerID="cri-o://cb59e9eb819d4ee5cda01a43d315223b4602645119f72fdc8a24888a01bdfb81" gracePeriod=30 Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.396231 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="proxy-httpd" containerID="cri-o://0539743122e295683803747a5ffd6188d343eb46dbf55fd94ecbff7626b3a961" gracePeriod=30 Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.396313 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-notification-agent" containerID="cri-o://a3f1788a26f64a5f11db8f702ac6c51bf557105ff0ed9904fe2452c139073ad1" gracePeriod=30 Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.396365 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="sg-core" containerID="cri-o://f1676e8037e7dd2174352a488d5cc89f6d0b0e9e91773be6d80e7335a9f7469f" gracePeriod=30 Nov 22 09:36:11 crc kubenswrapper[4858]: I1122 09:36:11.547509 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d252548-f4f3-4f94-84aa-4d2bf4ecbf96" 
path="/var/lib/kubelet/pods/9d252548-f4f3-4f94-84aa-4d2bf4ecbf96/volumes" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.016437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cf713f2-824f-4d23-bb3a-1b1f7ef99020","Type":"ContainerStarted","Data":"cd4ac53c7c037b114448c74d1fb5ca115e64028fa709acf595b4d3e033563293"} Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.018862 4858 generic.go:334] "Generic (PLEG): container finished" podID="043bb005-598c-40a0-8519-54f08e426c13" containerID="0539743122e295683803747a5ffd6188d343eb46dbf55fd94ecbff7626b3a961" exitCode=0 Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.018881 4858 generic.go:334] "Generic (PLEG): container finished" podID="043bb005-598c-40a0-8519-54f08e426c13" containerID="f1676e8037e7dd2174352a488d5cc89f6d0b0e9e91773be6d80e7335a9f7469f" exitCode=2 Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.018889 4858 generic.go:334] "Generic (PLEG): container finished" podID="043bb005-598c-40a0-8519-54f08e426c13" containerID="cb59e9eb819d4ee5cda01a43d315223b4602645119f72fdc8a24888a01bdfb81" exitCode=0 Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.019052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerDied","Data":"0539743122e295683803747a5ffd6188d343eb46dbf55fd94ecbff7626b3a961"} Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.019626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerDied","Data":"f1676e8037e7dd2174352a488d5cc89f6d0b0e9e91773be6d80e7335a9f7469f"} Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.019651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerDied","Data":"cb59e9eb819d4ee5cda01a43d315223b4602645119f72fdc8a24888a01bdfb81"} Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.349419 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.422348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data\") pod \"e2dc0863-1fed-426e-91b9-4112507cd4a2\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.422534 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts\") pod \"e2dc0863-1fed-426e-91b9-4112507cd4a2\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.422585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw4zz\" (UniqueName: \"kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz\") pod \"e2dc0863-1fed-426e-91b9-4112507cd4a2\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.422668 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle\") pod \"e2dc0863-1fed-426e-91b9-4112507cd4a2\" (UID: \"e2dc0863-1fed-426e-91b9-4112507cd4a2\") " Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.427683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts" (OuterVolumeSpecName: "scripts") pod "e2dc0863-1fed-426e-91b9-4112507cd4a2" (UID: "e2dc0863-1fed-426e-91b9-4112507cd4a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.428041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz" (OuterVolumeSpecName: "kube-api-access-hw4zz") pod "e2dc0863-1fed-426e-91b9-4112507cd4a2" (UID: "e2dc0863-1fed-426e-91b9-4112507cd4a2"). InnerVolumeSpecName "kube-api-access-hw4zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.451087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data" (OuterVolumeSpecName: "config-data") pod "e2dc0863-1fed-426e-91b9-4112507cd4a2" (UID: "e2dc0863-1fed-426e-91b9-4112507cd4a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.466978 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2dc0863-1fed-426e-91b9-4112507cd4a2" (UID: "e2dc0863-1fed-426e-91b9-4112507cd4a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.525233 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw4zz\" (UniqueName: \"kubernetes.io/projected/e2dc0863-1fed-426e-91b9-4112507cd4a2-kube-api-access-hw4zz\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.525290 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.525312 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:12 crc kubenswrapper[4858]: I1122 09:36:12.525349 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2dc0863-1fed-426e-91b9-4112507cd4a2-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:13 crc kubenswrapper[4858]: I1122 09:36:13.031538 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zgsvc" Nov 22 09:36:13 crc kubenswrapper[4858]: I1122 09:36:13.031567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zgsvc" event={"ID":"e2dc0863-1fed-426e-91b9-4112507cd4a2","Type":"ContainerDied","Data":"eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e"} Nov 22 09:36:13 crc kubenswrapper[4858]: I1122 09:36:13.032487 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba351942f7703b8d6dbb085129ca18e1980a7528f39786524bd0c2c49c41b4e" Nov 22 09:36:13 crc kubenswrapper[4858]: I1122 09:36:13.032528 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 09:36:13 crc kubenswrapper[4858]: I1122 09:36:13.098052 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.383628701 podStartE2EDuration="3.098029363s" podCreationTimestamp="2025-11-22 09:36:10 +0000 UTC" firstStartedPulling="2025-11-22 09:36:10.911627933 +0000 UTC m=+8732.753050939" lastFinishedPulling="2025-11-22 09:36:11.626028595 +0000 UTC m=+8733.467451601" observedRunningTime="2025-11-22 09:36:13.062267338 +0000 UTC m=+8734.903690384" watchObservedRunningTime="2025-11-22 09:36:13.098029363 +0000 UTC m=+8734.939452379" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.206178 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:14 crc kubenswrapper[4858]: E1122 09:36:14.206634 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2dc0863-1fed-426e-91b9-4112507cd4a2" containerName="aodh-db-sync" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.206648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2dc0863-1fed-426e-91b9-4112507cd4a2" containerName="aodh-db-sync" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.206825 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2dc0863-1fed-426e-91b9-4112507cd4a2" containerName="aodh-db-sync" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.209256 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.214980 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-nk2jv" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.218806 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.219070 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.241950 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.395599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpw5p\" (UniqueName: \"kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.395654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.395676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.395707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.498125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpw5p\" (UniqueName: \"kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.498168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.498189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.498213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: 
I1122 09:36:14.503676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.504023 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.512805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.530260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpw5p\" (UniqueName: \"kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p\") pod \"aodh-0\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " pod="openstack/aodh-0" Nov 22 09:36:14 crc kubenswrapper[4858]: I1122 09:36:14.549853 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.072241 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:15 crc kubenswrapper[4858]: W1122 09:36:15.081988 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f95faa_8695_4bd8_9c38_9cb92f778f5c.slice/crio-967cb193665f533158d9ef3b7b455c05fbf5f4b5981ca7ba940c1f14a6c28b50 WatchSource:0}: Error finding container 967cb193665f533158d9ef3b7b455c05fbf5f4b5981ca7ba940c1f14a6c28b50: Status 404 returned error can't find the container with id 967cb193665f533158d9ef3b7b455c05fbf5f4b5981ca7ba940c1f14a6c28b50 Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.128037 4858 generic.go:334] "Generic (PLEG): container finished" podID="043bb005-598c-40a0-8519-54f08e426c13" containerID="a3f1788a26f64a5f11db8f702ac6c51bf557105ff0ed9904fe2452c139073ad1" exitCode=0 Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.128427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerDied","Data":"a3f1788a26f64a5f11db8f702ac6c51bf557105ff0ed9904fe2452c139073ad1"} Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.131168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerStarted","Data":"967cb193665f533158d9ef3b7b455c05fbf5f4b5981ca7ba940c1f14a6c28b50"} Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.363378 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.430155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.430945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.431068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.431847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.436881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8j8w\" (UniqueName: \"kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.437105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.437161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.437184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle\") pod \"043bb005-598c-40a0-8519-54f08e426c13\" (UID: \"043bb005-598c-40a0-8519-54f08e426c13\") " Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.438126 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.442451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.447555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w" (OuterVolumeSpecName: "kube-api-access-k8j8w") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "kube-api-access-k8j8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.456132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts" (OuterVolumeSpecName: "scripts") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.486805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.539879 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.539909 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8j8w\" (UniqueName: \"kubernetes.io/projected/043bb005-598c-40a0-8519-54f08e426c13-kube-api-access-k8j8w\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.539920 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.539931 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043bb005-598c-40a0-8519-54f08e426c13-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.636555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.650700 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.674451 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.693677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data" (OuterVolumeSpecName: "config-data") pod "043bb005-598c-40a0-8519-54f08e426c13" (UID: "043bb005-598c-40a0-8519-54f08e426c13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.698542 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 09:36:15 crc kubenswrapper[4858]: I1122 09:36:15.752701 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043bb005-598c-40a0-8519-54f08e426c13-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.146609 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.147539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043bb005-598c-40a0-8519-54f08e426c13","Type":"ContainerDied","Data":"eb3cf85dfbda8dbf92a3c274af720ab98a0f0daf83dcda1780181677f7c27306"} Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.147597 4858 scope.go:117] "RemoveContainer" containerID="0539743122e295683803747a5ffd6188d343eb46dbf55fd94ecbff7626b3a961" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.165513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.188332 4858 scope.go:117] "RemoveContainer" containerID="f1676e8037e7dd2174352a488d5cc89f6d0b0e9e91773be6d80e7335a9f7469f" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.237524 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.238765 4858 scope.go:117] "RemoveContainer" containerID="a3f1788a26f64a5f11db8f702ac6c51bf557105ff0ed9904fe2452c139073ad1" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.250826 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272259 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:16 crc kubenswrapper[4858]: E1122 09:36:16.272675 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="sg-core" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272687 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="sg-core" Nov 22 09:36:16 crc kubenswrapper[4858]: E1122 09:36:16.272717 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="proxy-httpd" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272724 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="proxy-httpd" Nov 22 09:36:16 crc kubenswrapper[4858]: E1122 09:36:16.272737 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-notification-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272743 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-notification-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: E1122 09:36:16.272759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-central-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272765 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-central-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272936 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="proxy-httpd" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272949 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="sg-core" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272968 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-central-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.272986 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="043bb005-598c-40a0-8519-54f08e426c13" containerName="ceilometer-notification-agent" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.277083 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.285534 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.285862 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.285972 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.289492 4858 scope.go:117] "RemoveContainer" containerID="cb59e9eb819d4ee5cda01a43d315223b4602645119f72fdc8a24888a01bdfb81" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.289972 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.367808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4d22\" (UniqueName: \"kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.368025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc 
kubenswrapper[4858]: I1122 09:36:16.368050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469549 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469783 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.469807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4d22\" (UniqueName: \"kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.470942 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc 
kubenswrapper[4858]: I1122 09:36:16.474958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.474994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.475266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.478114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.482831 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.485210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.506037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4d22\" (UniqueName: \"kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22\") pod \"ceilometer-0\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " pod="openstack/ceilometer-0" Nov 22 09:36:16 crc kubenswrapper[4858]: I1122 09:36:16.615693 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:17 crc kubenswrapper[4858]: I1122 09:36:17.164377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerStarted","Data":"0f8e7e54ab67057e9b22a142ca3cd3b9163caeab6380ca24f020c0a9dddef1cf"} Nov 22 09:36:17 crc kubenswrapper[4858]: I1122 09:36:17.168736 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:17 crc kubenswrapper[4858]: I1122 09:36:17.274830 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:17 crc kubenswrapper[4858]: I1122 09:36:17.547199 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043bb005-598c-40a0-8519-54f08e426c13" path="/var/lib/kubelet/pods/043bb005-598c-40a0-8519-54f08e426c13/volumes" Nov 22 09:36:17 crc kubenswrapper[4858]: I1122 09:36:17.649910 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:18 crc kubenswrapper[4858]: I1122 09:36:18.174581 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerStarted","Data":"ae544805bd63e7688ec9a569c818a16392cdc20e35461ef4e09a484d18ddf4d1"} Nov 22 09:36:20 crc kubenswrapper[4858]: I1122 09:36:20.194727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerStarted","Data":"6ed46d6f1475ee32bbddd0f726191cb3d3a6e076406eface100b396489eff571"} Nov 22 09:36:20 crc kubenswrapper[4858]: I1122 09:36:20.423941 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 09:36:21 crc kubenswrapper[4858]: I1122 09:36:21.207407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerStarted","Data":"1d495f984f3b6d945657ce7ae038cd0cbfe972d74f71f76a65e1bccd999fbef1"} Nov 22 09:36:22 crc kubenswrapper[4858]: I1122 09:36:22.218614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerStarted","Data":"a6e37684167fd056feaa460bbf10b944fd1f211dd8ea4db747808b2898c0f6e3"} Nov 22 09:36:22 crc kubenswrapper[4858]: I1122 09:36:22.220825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerStarted","Data":"ff50f981b1a69ada72d51970f100f9bf0596290f104df5aa7eaf652d9b415651"} Nov 22 09:36:22 crc kubenswrapper[4858]: I1122 09:36:22.537823 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:36:22 crc kubenswrapper[4858]: E1122 09:36:22.538048 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.243887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerStarted","Data":"2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7"} Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.244646 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-api" containerID="cri-o://0f8e7e54ab67057e9b22a142ca3cd3b9163caeab6380ca24f020c0a9dddef1cf" gracePeriod=30 Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.245149 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-listener" containerID="cri-o://2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7" gracePeriod=30 Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.245196 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-notifier" containerID="cri-o://a6e37684167fd056feaa460bbf10b944fd1f211dd8ea4db747808b2898c0f6e3" gracePeriod=30 Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.245241 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-evaluator" containerID="cri-o://6ed46d6f1475ee32bbddd0f726191cb3d3a6e076406eface100b396489eff571" gracePeriod=30 Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.257853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerStarted","Data":"b698962619d69c20b5cd80cf5156ce5af3148b4c07a5155a8300590823d84fcb"} Nov 22 09:36:24 crc kubenswrapper[4858]: I1122 09:36:24.271523 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.6009621840000001 podStartE2EDuration="10.271505034s" podCreationTimestamp="2025-11-22 09:36:14 +0000 UTC" firstStartedPulling="2025-11-22 09:36:15.084701492 +0000 UTC m=+8736.926124498" lastFinishedPulling="2025-11-22 09:36:23.755244342 +0000 UTC m=+8745.596667348" observedRunningTime="2025-11-22 09:36:24.264702876 +0000 UTC m=+8746.106125882" watchObservedRunningTime="2025-11-22 09:36:24.271505034 +0000 UTC m=+8746.112928040" Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282115 4858 generic.go:334] "Generic (PLEG): container finished" podID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerID="a6e37684167fd056feaa460bbf10b944fd1f211dd8ea4db747808b2898c0f6e3" exitCode=0 Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerDied","Data":"a6e37684167fd056feaa460bbf10b944fd1f211dd8ea4db747808b2898c0f6e3"} Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282532 4858 generic.go:334] "Generic (PLEG): container finished" podID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerID="6ed46d6f1475ee32bbddd0f726191cb3d3a6e076406eface100b396489eff571" exitCode=0 Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282545 4858 generic.go:334] "Generic (PLEG): container finished" podID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerID="0f8e7e54ab67057e9b22a142ca3cd3b9163caeab6380ca24f020c0a9dddef1cf" exitCode=0 Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282549 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerDied","Data":"6ed46d6f1475ee32bbddd0f726191cb3d3a6e076406eface100b396489eff571"} Nov 22 09:36:25 crc kubenswrapper[4858]: I1122 09:36:25.282580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerDied","Data":"0f8e7e54ab67057e9b22a142ca3cd3b9163caeab6380ca24f020c0a9dddef1cf"} Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerStarted","Data":"188789009ba7b19d7c56b194dc53ea5ec84691a4eb4c660fde5020fadaf0d481"} Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293615 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-central-agent" containerID="cri-o://1d495f984f3b6d945657ce7ae038cd0cbfe972d74f71f76a65e1bccd999fbef1" gracePeriod=30 Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293877 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="sg-core" containerID="cri-o://b698962619d69c20b5cd80cf5156ce5af3148b4c07a5155a8300590823d84fcb" gracePeriod=30 Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293940 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="proxy-httpd" containerID="cri-o://188789009ba7b19d7c56b194dc53ea5ec84691a4eb4c660fde5020fadaf0d481" gracePeriod=30 Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.293905 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-notification-agent" containerID="cri-o://ff50f981b1a69ada72d51970f100f9bf0596290f104df5aa7eaf652d9b415651" gracePeriod=30 Nov 22 09:36:26 crc kubenswrapper[4858]: I1122 09:36:26.318389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.355218056 podStartE2EDuration="10.318368568s" podCreationTimestamp="2025-11-22 09:36:16 +0000 UTC" firstStartedPulling="2025-11-22 09:36:17.284499431 +0000 UTC m=+8739.125922437" lastFinishedPulling="2025-11-22 09:36:25.247649943 +0000 UTC m=+8747.089072949" observedRunningTime="2025-11-22 09:36:26.317685887 +0000 UTC m=+8748.159108903" watchObservedRunningTime="2025-11-22 09:36:26.318368568 +0000 UTC m=+8748.159791574" Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 09:36:27.307170 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerID="188789009ba7b19d7c56b194dc53ea5ec84691a4eb4c660fde5020fadaf0d481" exitCode=0 Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 09:36:27.307210 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerID="b698962619d69c20b5cd80cf5156ce5af3148b4c07a5155a8300590823d84fcb" exitCode=2 Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 
09:36:27.307222 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerID="ff50f981b1a69ada72d51970f100f9bf0596290f104df5aa7eaf652d9b415651" exitCode=0 Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 09:36:27.307249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerDied","Data":"188789009ba7b19d7c56b194dc53ea5ec84691a4eb4c660fde5020fadaf0d481"} Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 09:36:27.307303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerDied","Data":"b698962619d69c20b5cd80cf5156ce5af3148b4c07a5155a8300590823d84fcb"} Nov 22 09:36:27 crc kubenswrapper[4858]: I1122 09:36:27.307336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerDied","Data":"ff50f981b1a69ada72d51970f100f9bf0596290f104df5aa7eaf652d9b415651"} Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.320962 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerID="1d495f984f3b6d945657ce7ae038cd0cbfe972d74f71f76a65e1bccd999fbef1" exitCode=0 Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.321056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerDied","Data":"1d495f984f3b6d945657ce7ae038cd0cbfe972d74f71f76a65e1bccd999fbef1"} Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.642107 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.681949 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.682090 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4d22\" (UniqueName: \"kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.682168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.683167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.683300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml\") pod 
\"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.683378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.683492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.683573 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data\") pod \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\" (UID: \"dc1533c2-e7a0-4708-9ca5-f73fce7e4655\") " Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.684158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.684524 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.684683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.689098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22" (OuterVolumeSpecName: "kube-api-access-x4d22") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "kube-api-access-x4d22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.696094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts" (OuterVolumeSpecName: "scripts") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.717714 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.760662 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.776479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786575 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786610 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786622 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4d22\" (UniqueName: \"kubernetes.io/projected/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-kube-api-access-x4d22\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786633 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786644 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.786654 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.808876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data" (OuterVolumeSpecName: "config-data") pod "dc1533c2-e7a0-4708-9ca5-f73fce7e4655" (UID: "dc1533c2-e7a0-4708-9ca5-f73fce7e4655"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:28 crc kubenswrapper[4858]: I1122 09:36:28.888495 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1533c2-e7a0-4708-9ca5-f73fce7e4655-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.333748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc1533c2-e7a0-4708-9ca5-f73fce7e4655","Type":"ContainerDied","Data":"ae544805bd63e7688ec9a569c818a16392cdc20e35461ef4e09a484d18ddf4d1"} Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.334084 4858 scope.go:117] "RemoveContainer" containerID="188789009ba7b19d7c56b194dc53ea5ec84691a4eb4c660fde5020fadaf0d481" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.333825 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.360919 4858 scope.go:117] "RemoveContainer" containerID="b698962619d69c20b5cd80cf5156ce5af3148b4c07a5155a8300590823d84fcb" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.384554 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.398093 4858 scope.go:117] "RemoveContainer" containerID="ff50f981b1a69ada72d51970f100f9bf0596290f104df5aa7eaf652d9b415651" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.406400 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.423677 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:29 crc kubenswrapper[4858]: E1122 09:36:29.424138 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="proxy-httpd" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424160 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="proxy-httpd" Nov 22 09:36:29 crc kubenswrapper[4858]: E1122 09:36:29.424180 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-notification-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424187 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-notification-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: E1122 09:36:29.424206 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="sg-core" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424211 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="sg-core" Nov 22 09:36:29 crc kubenswrapper[4858]: E1122 09:36:29.424229 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-central-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424235 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-central-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424465 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" 
containerName="ceilometer-central-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="sg-core" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424488 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="ceilometer-notification-agent" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.424498 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" containerName="proxy-httpd" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.426339 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.428039 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.428580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.438838 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.444214 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.447037 4858 scope.go:117] "RemoveContainer" containerID="1d495f984f3b6d945657ce7ae038cd0cbfe972d74f71f76a65e1bccd999fbef1" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.501831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt68j\" (UniqueName: \"kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.501908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.501942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.501977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.502002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc 
kubenswrapper[4858]: I1122 09:36:29.502119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.502288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.502401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.552275 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1533c2-e7a0-4708-9ca5-f73fce7e4655" path="/var/lib/kubelet/pods/dc1533c2-e7a0-4708-9ca5-f73fce7e4655/volumes" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604002 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt68j\" (UniqueName: \"kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604183 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604268 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.604343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.607408 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.607971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.613900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.617388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.622008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.625391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.637091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.637565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt68j\" (UniqueName: \"kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j\") pod \"ceilometer-0\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " pod="openstack/ceilometer-0" Nov 22 09:36:29 crc kubenswrapper[4858]: I1122 09:36:29.760601 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:36:30 crc kubenswrapper[4858]: I1122 09:36:30.251680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:36:30 crc kubenswrapper[4858]: I1122 09:36:30.347086 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerStarted","Data":"17559e33db3a210ab25ba541c0fefc0f1f394263d66cd03501c6412f059d82a6"} Nov 22 09:36:31 crc kubenswrapper[4858]: I1122 09:36:31.361135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerStarted","Data":"921a60c315076bfa09bfff124ab92deecdc0625f09d81b3bb232d4ef1e293e81"} Nov 22 09:36:32 crc kubenswrapper[4858]: I1122 09:36:32.375771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerStarted","Data":"37be865a00cf89c403b4aeab789ef0fd27e0c3496d6c037ceca384efb5e151a4"} Nov 22 09:36:33 crc kubenswrapper[4858]: I1122 09:36:33.399794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerStarted","Data":"c025194ddf7a068b573c198cafa5d2010ef4df2be27ccc43f8c168cace634da0"} Nov 22 09:36:34 crc kubenswrapper[4858]: I1122 09:36:34.414649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerStarted","Data":"7c22c1647b976812b9a9e2e33c7532864b50fff449effa75e59831dd2b9c3c8f"} Nov 22 09:36:34 crc kubenswrapper[4858]: I1122 09:36:34.415449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 09:36:34 crc kubenswrapper[4858]: I1122 09:36:34.447343 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.434308682 podStartE2EDuration="5.447302836s" podCreationTimestamp="2025-11-22 09:36:29 +0000 UTC" firstStartedPulling="2025-11-22 09:36:30.260122285 +0000 UTC m=+8752.101545301" lastFinishedPulling="2025-11-22 09:36:33.273116459 +0000 UTC m=+8755.114539455" observedRunningTime="2025-11-22 09:36:34.430752456 +0000 UTC m=+8756.272175502" watchObservedRunningTime="2025-11-22 09:36:34.447302836 +0000 UTC m=+8756.288725862" Nov 22 09:36:35 crc kubenswrapper[4858]: I1122 09:36:35.536246 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:36:35 crc kubenswrapper[4858]: E1122 09:36:35.536743 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:36:40 crc kubenswrapper[4858]: I1122 09:36:40.068732 4858 scope.go:117] "RemoveContainer" containerID="91f9622b223adaca3b33950c0f7e0f897a2452d73cb4562eab021629c98ec8c4" Nov 22 09:36:40 crc kubenswrapper[4858]: I1122 09:36:40.105133 4858 scope.go:117] "RemoveContainer" containerID="ef464a3f7686d41a4099830d4c38a9d573796b3dd992d56d2f045b8d89ef85d0" Nov 22 09:36:40 crc kubenswrapper[4858]: I1122 
09:36:40.166114 4858 scope.go:117] "RemoveContainer" containerID="6e096e4a2ae7155d6d8c46d23f8467defc40e77a3fc4dbf0e562a5a006031139" Nov 22 09:36:40 crc kubenswrapper[4858]: I1122 09:36:40.187183 4858 scope.go:117] "RemoveContainer" containerID="092e0687587aa39bb5e015bb8950c121b83412c6e819836040b6de00604dd898" Nov 22 09:36:44 crc kubenswrapper[4858]: I1122 09:36:44.044183 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mpk4l"] Nov 22 09:36:44 crc kubenswrapper[4858]: I1122 09:36:44.065181 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mpk4l"] Nov 22 09:36:45 crc kubenswrapper[4858]: I1122 09:36:45.560066 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8fe1a0-6f1a-4ac5-b3b4-871336c73852" path="/var/lib/kubelet/pods/ac8fe1a0-6f1a-4ac5-b3b4-871336c73852/volumes" Nov 22 09:36:46 crc kubenswrapper[4858]: I1122 09:36:46.535473 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:36:46 crc kubenswrapper[4858]: E1122 09:36:46.536387 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:36:54 crc kubenswrapper[4858]: E1122 09:36:54.602616 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f95faa_8695_4bd8_9c38_9cb92f778f5c.slice/crio-conmon-2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.652914 4858 generic.go:334] "Generic (PLEG): container finished" podID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerID="2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7" exitCode=137 Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.652966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerDied","Data":"2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7"} Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.820933 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.975804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle\") pod \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.975921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts\") pod \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.976016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data\") pod \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.976056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpw5p\" (UniqueName: \"kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p\") pod \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\" (UID: \"14f95faa-8695-4bd8-9c38-9cb92f778f5c\") " Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.984157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p" (OuterVolumeSpecName: "kube-api-access-fpw5p") pod "14f95faa-8695-4bd8-9c38-9cb92f778f5c" (UID: "14f95faa-8695-4bd8-9c38-9cb92f778f5c"). InnerVolumeSpecName "kube-api-access-fpw5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:54 crc kubenswrapper[4858]: I1122 09:36:54.985036 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts" (OuterVolumeSpecName: "scripts") pod "14f95faa-8695-4bd8-9c38-9cb92f778f5c" (UID: "14f95faa-8695-4bd8-9c38-9cb92f778f5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.078905 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.079284 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpw5p\" (UniqueName: \"kubernetes.io/projected/14f95faa-8695-4bd8-9c38-9cb92f778f5c-kube-api-access-fpw5p\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.128704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data" (OuterVolumeSpecName: "config-data") pod "14f95faa-8695-4bd8-9c38-9cb92f778f5c" (UID: "14f95faa-8695-4bd8-9c38-9cb92f778f5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.140849 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f95faa-8695-4bd8-9c38-9cb92f778f5c" (UID: "14f95faa-8695-4bd8-9c38-9cb92f778f5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.181045 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.181124 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f95faa-8695-4bd8-9c38-9cb92f778f5c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.665646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f95faa-8695-4bd8-9c38-9cb92f778f5c","Type":"ContainerDied","Data":"967cb193665f533158d9ef3b7b455c05fbf5f4b5981ca7ba940c1f14a6c28b50"} Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.665718 4858 scope.go:117] "RemoveContainer" containerID="2c446ad205673e8826f4a4332c472f58d2158a924a6184ab086f32953d2b1ca7" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.665768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.692517 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.703071 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.710521 4858 scope.go:117] "RemoveContainer" containerID="a6e37684167fd056feaa460bbf10b944fd1f211dd8ea4db747808b2898c0f6e3" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.726557 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:55 crc kubenswrapper[4858]: E1122 09:36:55.727410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-api" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.727529 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-api" Nov 22 09:36:55 crc kubenswrapper[4858]: E1122 09:36:55.727652 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-evaluator" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.727740 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-evaluator" Nov 22 09:36:55 crc kubenswrapper[4858]: E1122 09:36:55.727824 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-notifier" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.727908 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-notifier" Nov 22 09:36:55 crc kubenswrapper[4858]: E1122 09:36:55.728011 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-listener" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.728097 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-listener" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.728460 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-evaluator" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.729749 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-listener" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.729869 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-api" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.729988 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" containerName="aodh-notifier" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.732605 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.735875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-nk2jv" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.736240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.736460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.736594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.737236 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.738588 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.740226 4858 scope.go:117] "RemoveContainer" containerID="6ed46d6f1475ee32bbddd0f726191cb3d3a6e076406eface100b396489eff571" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.787655 4858 scope.go:117] "RemoveContainer" containerID="0f8e7e54ab67057e9b22a142ca3cd3b9163caeab6380ca24f020c0a9dddef1cf" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.897896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.898009 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.898117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.898232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.898404 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:55 crc kubenswrapper[4858]: I1122 09:36:55.898705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.000906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.001127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.001179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.001281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.001435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.001507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.006017 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.006085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.007116 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.007195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.008030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.022216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb\") pod \"aodh-0\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.070396 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 09:36:56 crc kubenswrapper[4858]: W1122 09:36:56.587710 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555e309c_8c41_4ac1_8eca_60e203f92e4e.slice/crio-67d7dacb9dd84b77bc7123d14218c050d3f94077f543d53dc2bdb195e13800a6 WatchSource:0}: Error finding container 67d7dacb9dd84b77bc7123d14218c050d3f94077f543d53dc2bdb195e13800a6: Status 404 returned error can't find the container with id 67d7dacb9dd84b77bc7123d14218c050d3f94077f543d53dc2bdb195e13800a6 Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.588213 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 09:36:56 crc kubenswrapper[4858]: I1122 09:36:56.682031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerStarted","Data":"67d7dacb9dd84b77bc7123d14218c050d3f94077f543d53dc2bdb195e13800a6"} Nov 22 09:36:57 crc kubenswrapper[4858]: I1122 09:36:57.547218 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f95faa-8695-4bd8-9c38-9cb92f778f5c" path="/var/lib/kubelet/pods/14f95faa-8695-4bd8-9c38-9cb92f778f5c/volumes" Nov 22 09:36:57 crc kubenswrapper[4858]: I1122 09:36:57.703652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerStarted","Data":"5e0c7f07b403939e0b30d379cfb2f6f7c0e0f0331d4da8acd1e935938d2cf0d3"} Nov 22 09:36:57 crc kubenswrapper[4858]: I1122 09:36:57.703707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerStarted","Data":"0e440c8b36113cc42b6ddd774ee75f89f04beccb033e5b4e3d7827901f46cf17"} Nov 22 09:36:58 crc kubenswrapper[4858]: I1122 09:36:58.720543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerStarted","Data":"3967676f8e95adb5ee5014b410d8fa6ed22970b37607a556cdda336ed986c928"} Nov 22 09:36:58 crc kubenswrapper[4858]: I1122 09:36:58.723167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerStarted","Data":"87056ef5db220c131bb3ec20fed1d41cb562684d629666af50d9c09b8a77410d"} Nov 22 09:36:58 crc kubenswrapper[4858]: I1122 09:36:58.763488 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.334178176 podStartE2EDuration="3.763456067s" podCreationTimestamp="2025-11-22 09:36:55 +0000 UTC" firstStartedPulling="2025-11-22 09:36:56.590788586 +0000 UTC m=+8778.432211602" lastFinishedPulling="2025-11-22 09:36:58.020066487 +0000 UTC m=+8779.861489493" observedRunningTime="2025-11-22 09:36:58.745495833 +0000 UTC m=+8780.586918899" watchObservedRunningTime="2025-11-22 09:36:58.763456067 +0000 UTC m=+8780.604879113" Nov 22 09:36:59 crc kubenswrapper[4858]: I1122 09:36:59.547205 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:36:59 crc kubenswrapper[4858]: E1122 09:36:59.547549 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:36:59 crc kubenswrapper[4858]: I1122 09:36:59.772061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 09:37:12 crc kubenswrapper[4858]: I1122 09:37:12.536208 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:37:12 crc kubenswrapper[4858]: E1122 09:37:12.537040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:37:13 crc kubenswrapper[4858]: I1122 09:37:13.047931 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8911-account-create-scn9n"] Nov 22 09:37:13 crc kubenswrapper[4858]: I1122 09:37:13.056154 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8911-account-create-scn9n"] Nov 22 09:37:13 crc kubenswrapper[4858]: I1122 09:37:13.547134 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612fc455-b33b-48db-9146-0e99a8f7dd73" path="/var/lib/kubelet/pods/612fc455-b33b-48db-9146-0e99a8f7dd73/volumes" Nov 22 09:37:14 crc kubenswrapper[4858]: I1122 09:37:14.026286 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sgm8k"] Nov 22 09:37:14 crc kubenswrapper[4858]: I1122 09:37:14.035764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sgm8k"] Nov 22 09:37:15 crc kubenswrapper[4858]: I1122 09:37:15.549423 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e" path="/var/lib/kubelet/pods/08a86cb6-d9b2-4bcc-8c0f-3081c4c1c19e/volumes" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.528852 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.532701 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.537192 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.670104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcgbt\" (UniqueName: \"kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.670152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.670302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.772134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.772648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.772852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcgbt\" (UniqueName: \"kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.772881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:20 crc kubenswrapper[4858]: I1122 09:37:20.773223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:21 crc kubenswrapper[4858]: I1122 09:37:21.265858 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dcgbt\" (UniqueName: \"kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt\") pod \"community-operators-q4czh\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:21 crc kubenswrapper[4858]: I1122 09:37:21.455816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:21 crc kubenswrapper[4858]: I1122 09:37:21.958944 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:21 crc kubenswrapper[4858]: I1122 09:37:21.994016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerStarted","Data":"d12d54611c85b206856ad52d473a95a1e058bee0bbc030c918a9a8089840f4e1"} Nov 22 09:37:23 crc kubenswrapper[4858]: I1122 09:37:23.006574 4858 generic.go:334] "Generic (PLEG): container finished" podID="6eff26ce-8f33-410b-9212-f256dc60452e" containerID="647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16" exitCode=0 Nov 22 09:37:23 crc kubenswrapper[4858]: I1122 09:37:23.006668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerDied","Data":"647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16"} Nov 22 09:37:23 crc kubenswrapper[4858]: I1122 09:37:23.039974 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dtwr9"] Nov 22 09:37:23 crc kubenswrapper[4858]: I1122 09:37:23.063037 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dtwr9"] Nov 22 09:37:23 crc kubenswrapper[4858]: I1122 09:37:23.548413 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e60d88-c1dc-4437-8156-2fa02492e68d" path="/var/lib/kubelet/pods/16e60d88-c1dc-4437-8156-2fa02492e68d/volumes" Nov 22 09:37:25 crc kubenswrapper[4858]: I1122 09:37:25.031160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerStarted","Data":"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5"} Nov 22 09:37:26 crc kubenswrapper[4858]: I1122 09:37:26.535470 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:37:26 crc kubenswrapper[4858]: E1122 09:37:26.536127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:37:27 crc kubenswrapper[4858]: I1122 09:37:27.063795 4858 generic.go:334] "Generic (PLEG): container finished" podID="6eff26ce-8f33-410b-9212-f256dc60452e" containerID="de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5" exitCode=0 Nov 22 09:37:27 crc kubenswrapper[4858]: I1122 09:37:27.063842 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerDied","Data":"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5"} Nov 22 09:37:28 crc kubenswrapper[4858]: I1122 09:37:28.077963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerStarted","Data":"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69"} Nov 22 09:37:28 crc kubenswrapper[4858]: I1122 09:37:28.101519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q4czh" podStartSLOduration=3.626706828 podStartE2EDuration="8.101503012s" podCreationTimestamp="2025-11-22 09:37:20 +0000 UTC" firstStartedPulling="2025-11-22 09:37:23.009243467 +0000 UTC m=+8804.850666473" lastFinishedPulling="2025-11-22 09:37:27.484039651 +0000 UTC m=+8809.325462657" observedRunningTime="2025-11-22 09:37:28.096284385 +0000 UTC m=+8809.937707391" watchObservedRunningTime="2025-11-22 09:37:28.101503012 +0000 UTC m=+8809.942926018" Nov 22 09:37:31 crc kubenswrapper[4858]: I1122 09:37:31.455978 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:31 crc kubenswrapper[4858]: I1122 09:37:31.456628 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:31 crc kubenswrapper[4858]: I1122 09:37:31.517428 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:32 crc kubenswrapper[4858]: I1122 09:37:32.156504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:32 crc kubenswrapper[4858]: I1122 09:37:32.206529 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.128291 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q4czh" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="registry-server" containerID="cri-o://b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69" gracePeriod=2 Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.634005 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.782912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content\") pod \"6eff26ce-8f33-410b-9212-f256dc60452e\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.782993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcgbt\" (UniqueName: \"kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt\") pod \"6eff26ce-8f33-410b-9212-f256dc60452e\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.783171 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities\") pod \"6eff26ce-8f33-410b-9212-f256dc60452e\" (UID: \"6eff26ce-8f33-410b-9212-f256dc60452e\") " Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.785550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities" (OuterVolumeSpecName: "utilities") pod "6eff26ce-8f33-410b-9212-f256dc60452e" (UID: "6eff26ce-8f33-410b-9212-f256dc60452e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.788605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt" (OuterVolumeSpecName: "kube-api-access-dcgbt") pod "6eff26ce-8f33-410b-9212-f256dc60452e" (UID: "6eff26ce-8f33-410b-9212-f256dc60452e"). InnerVolumeSpecName "kube-api-access-dcgbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.847300 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eff26ce-8f33-410b-9212-f256dc60452e" (UID: "6eff26ce-8f33-410b-9212-f256dc60452e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.885312 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.885375 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eff26ce-8f33-410b-9212-f256dc60452e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:34 crc kubenswrapper[4858]: I1122 09:37:34.885393 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcgbt\" (UniqueName: \"kubernetes.io/projected/6eff26ce-8f33-410b-9212-f256dc60452e-kube-api-access-dcgbt\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.140787 4858 generic.go:334] "Generic (PLEG): container finished" podID="6eff26ce-8f33-410b-9212-f256dc60452e" containerID="b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69" exitCode=0 Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.140846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerDied","Data":"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69"} Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.140925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4czh" event={"ID":"6eff26ce-8f33-410b-9212-f256dc60452e","Type":"ContainerDied","Data":"d12d54611c85b206856ad52d473a95a1e058bee0bbc030c918a9a8089840f4e1"} Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.140956 4858 scope.go:117] "RemoveContainer" containerID="b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.140872 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4czh" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.190301 4858 scope.go:117] "RemoveContainer" containerID="de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.201960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.232489 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q4czh"] Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.255583 4858 scope.go:117] "RemoveContainer" containerID="647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.281199 4858 scope.go:117] "RemoveContainer" containerID="b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69" Nov 22 09:37:35 crc kubenswrapper[4858]: E1122 09:37:35.281803 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69\": container with ID starting with b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69 not found: ID does not exist" containerID="b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.281863 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69"} err="failed to get container status \"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69\": rpc error: code = NotFound desc = could not find container \"b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69\": container with ID starting with b4dd176cd36fb3db0eb34f3095b24063de0aca1e7d9801adba6fcc63d6ea0b69 not found: ID does not exist" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.281893 4858 scope.go:117] "RemoveContainer" containerID="de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5" Nov 22 09:37:35 crc kubenswrapper[4858]: E1122 09:37:35.282375 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5\": container with ID starting with de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5 not found: ID does not exist" containerID="de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.282415 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5"} err="failed to get container status \"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5\": rpc error: code = NotFound desc = could not find container \"de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5\": container with ID starting with de716ba8719abad87affdfeed97674b2be7ff582b37eca7e5e0bd43a24ba36d5 not found: ID does not exist" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.282441 4858 scope.go:117] "RemoveContainer" containerID="647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16" Nov 22 09:37:35 crc kubenswrapper[4858]: E1122 09:37:35.282758 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16\": container with ID starting with 647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16 not found: ID does not exist" containerID="647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.282778 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16"} err="failed to get container status \"647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16\": rpc error: code = NotFound desc = could not find container \"647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16\": container with ID starting with 647795c016ed8de65a2cdc2888926c8c822a1731908ca149e8b9bc8e7d645b16 not found: ID does not exist" Nov 22 09:37:35 crc kubenswrapper[4858]: I1122 09:37:35.554067 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" path="/var/lib/kubelet/pods/6eff26ce-8f33-410b-9212-f256dc60452e/volumes" Nov 22 09:37:37 crc kubenswrapper[4858]: I1122 09:37:37.535972 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:37:37 crc kubenswrapper[4858]: E1122 09:37:37.536641 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.342089 4858 scope.go:117] "RemoveContainer" containerID="5c74ad98a26b5abf2505019b8830b75720846382cae9001a96ee50c0ef31e4c6" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.372107 4858 scope.go:117] "RemoveContainer" containerID="26fccc2831fe719dcae1f491e8756664bbc10b9296a7e26511127930fb850eb3" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.390267 4858 scope.go:117] "RemoveContainer" containerID="b21c85e551fc5e0dc3400d220584dcb594a68ce78dcf67ac5c576b6a5cab55d1" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.589078 4858 scope.go:117] "RemoveContainer" containerID="e2c5a022611c44ce3cbed51b69c30c11acf8f759b76b14b619abf808be7ffc30" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.627097 4858 scope.go:117] "RemoveContainer" containerID="6964aeee339eb70d00d139334408f099093bdc3299e0ca665e1767eaf0192b7e" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.724638 4858 scope.go:117] "RemoveContainer" containerID="501193bba928fcf19597d861908ba432fbbe9c34e5311d048351828fa311d0cd" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.753363 4858 scope.go:117] "RemoveContainer" containerID="a1b8f2ccfd39761ee60ef25f03ac3aaca90b8092dda767f2bd5a5ea9a13d2e9e" Nov 22 09:37:40 crc kubenswrapper[4858]: I1122 09:37:40.952453 4858 scope.go:117] "RemoveContainer" containerID="6c38eb81f75567f8fdaad6f8bbc06fb2f53e4c4382721b00a34f24280b871b64" Nov 22 09:37:49 crc kubenswrapper[4858]: I1122 09:37:49.543801 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:37:49 crc kubenswrapper[4858]: E1122 09:37:49.544880 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:02 crc kubenswrapper[4858]: I1122 09:38:02.538002 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:38:02 crc kubenswrapper[4858]: E1122 09:38:02.539093 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:14 crc kubenswrapper[4858]: I1122 09:38:14.536221 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:38:14 crc kubenswrapper[4858]: E1122 09:38:14.537080 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.677666 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:20 crc kubenswrapper[4858]: E1122 09:38:20.678864 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="extract-utilities" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.678891 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="extract-utilities" Nov 22 09:38:20 crc kubenswrapper[4858]: E1122 09:38:20.678936 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="extract-content" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.678948 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="extract-content" Nov 22 09:38:20 crc kubenswrapper[4858]: E1122 09:38:20.678984 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="registry-server" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.678995 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="registry-server" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.679344 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eff26ce-8f33-410b-9212-f256dc60452e" containerName="registry-server" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.682111 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.691166 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.843378 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.843473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.843542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhf6\" (UniqueName: \"kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.946005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.946108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.946185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhf6\" (UniqueName: \"kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.946624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.946734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:20 crc kubenswrapper[4858]: I1122 09:38:20.974760 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ddhf6\" (UniqueName: \"kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6\") pod \"redhat-marketplace-dxzqt\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:21 crc kubenswrapper[4858]: I1122 09:38:21.017546 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:21 crc kubenswrapper[4858]: I1122 09:38:21.525279 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:21 crc kubenswrapper[4858]: I1122 09:38:21.673232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerStarted","Data":"8179a84c56cae98ff0f2cf9c43c92aa30c85dd1dedfe1eaa6d1b2714ea92c9cf"} Nov 22 09:38:22 crc kubenswrapper[4858]: I1122 09:38:22.690427 4858 generic.go:334] "Generic (PLEG): container finished" podID="7516c723-95c1-4049-b731-094744f60fa8" containerID="147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b" exitCode=0 Nov 22 09:38:22 crc kubenswrapper[4858]: I1122 09:38:22.690501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerDied","Data":"147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b"} Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.048912 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4pspr"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.060568 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mrxcb"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.076835 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7lgn8"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.087364 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4pspr"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.096058 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-04c7-account-create-dlvt4"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.104437 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7lgn8"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.112139 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mrxcb"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.123285 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-04c7-account-create-dlvt4"] Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.714408 4858 generic.go:334] "Generic (PLEG): container finished" podID="7516c723-95c1-4049-b731-094744f60fa8" containerID="80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79" exitCode=0 Nov 22 09:38:24 crc kubenswrapper[4858]: I1122 09:38:24.714468 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerDied","Data":"80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79"} Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.038437 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-f3b1-account-create-sp5kt"] Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.058735 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b969-account-create-5fcl7"] Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.073556 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f3b1-account-create-sp5kt"] Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.080222 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b969-account-create-5fcl7"] Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.550132 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088a68f9-5376-412d-a96d-8f16ecf6a850" path="/var/lib/kubelet/pods/088a68f9-5376-412d-a96d-8f16ecf6a850/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.551375 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c138cb2-52d6-4af4-947e-ab721fc2b04d" path="/var/lib/kubelet/pods/3c138cb2-52d6-4af4-947e-ab721fc2b04d/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.552570 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ca7ac5-bfc8-44b0-b0c5-7bb71992848a" path="/var/lib/kubelet/pods/89ca7ac5-bfc8-44b0-b0c5-7bb71992848a/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.553795 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6894e36-c578-4bc1-99c3-96934f75664f" path="/var/lib/kubelet/pods/b6894e36-c578-4bc1-99c3-96934f75664f/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.555858 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec81453b-74ee-4e47-9838-925ccbc8cace" path="/var/lib/kubelet/pods/ec81453b-74ee-4e47-9838-925ccbc8cace/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.556949 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09c5150-3176-431c-a614-589a67efffa0" path="/var/lib/kubelet/pods/f09c5150-3176-431c-a614-589a67efffa0/volumes" Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.726746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerStarted","Data":"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131"} Nov 22 09:38:25 crc kubenswrapper[4858]: I1122 09:38:25.747553 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dxzqt" podStartSLOduration=3.318087248 podStartE2EDuration="5.747525737s" podCreationTimestamp="2025-11-22 09:38:20 +0000 UTC" firstStartedPulling="2025-11-22 09:38:22.694001436 +0000 UTC m=+8864.535424442" lastFinishedPulling="2025-11-22 09:38:25.123439915 +0000 UTC m=+8866.964862931" observedRunningTime="2025-11-22 09:38:25.740972947 +0000 UTC m=+8867.582396013" watchObservedRunningTime="2025-11-22 09:38:25.747525737 +0000 UTC m=+8867.588948783" Nov 22 09:38:27 crc kubenswrapper[4858]: I1122 09:38:27.536630 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:38:27 crc kubenswrapper[4858]: E1122 09:38:27.537108 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:31 crc kubenswrapper[4858]: I1122 09:38:31.018156 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:31 crc kubenswrapper[4858]: I1122 09:38:31.018605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:31 crc kubenswrapper[4858]: I1122 09:38:31.101811 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:31 crc kubenswrapper[4858]: I1122 09:38:31.860999 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:31 crc kubenswrapper[4858]: I1122 09:38:31.922936 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:33 crc kubenswrapper[4858]: I1122 09:38:33.799960 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dxzqt" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="registry-server" containerID="cri-o://a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131" gracePeriod=2 Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.278765 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.332860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content\") pod \"7516c723-95c1-4049-b731-094744f60fa8\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.333066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddhf6\" (UniqueName: \"kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6\") pod \"7516c723-95c1-4049-b731-094744f60fa8\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.333095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities\") pod \"7516c723-95c1-4049-b731-094744f60fa8\" (UID: \"7516c723-95c1-4049-b731-094744f60fa8\") " Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.333893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities" (OuterVolumeSpecName: "utilities") pod "7516c723-95c1-4049-b731-094744f60fa8" (UID: "7516c723-95c1-4049-b731-094744f60fa8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.339240 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6" (OuterVolumeSpecName: "kube-api-access-ddhf6") pod "7516c723-95c1-4049-b731-094744f60fa8" (UID: "7516c723-95c1-4049-b731-094744f60fa8"). InnerVolumeSpecName "kube-api-access-ddhf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.350455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7516c723-95c1-4049-b731-094744f60fa8" (UID: "7516c723-95c1-4049-b731-094744f60fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.435626 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.435659 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddhf6\" (UniqueName: \"kubernetes.io/projected/7516c723-95c1-4049-b731-094744f60fa8-kube-api-access-ddhf6\") on node \"crc\" DevicePath \"\"" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.435671 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7516c723-95c1-4049-b731-094744f60fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.817521 4858 generic.go:334] "Generic (PLEG): container finished" podID="7516c723-95c1-4049-b731-094744f60fa8" containerID="a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131" exitCode=0 Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.817639 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxzqt" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.817645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerDied","Data":"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131"} Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.817717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxzqt" event={"ID":"7516c723-95c1-4049-b731-094744f60fa8","Type":"ContainerDied","Data":"8179a84c56cae98ff0f2cf9c43c92aa30c85dd1dedfe1eaa6d1b2714ea92c9cf"} Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.817748 4858 scope.go:117] "RemoveContainer" containerID="a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.842203 4858 scope.go:117] "RemoveContainer" containerID="80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79" Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.867759 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.877669 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxzqt"] Nov 22 09:38:34 crc kubenswrapper[4858]: I1122 09:38:34.979712 4858 scope.go:117] "RemoveContainer" containerID="147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.048301 4858 scope.go:117] "RemoveContainer" containerID="a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131" Nov 22 09:38:35 crc kubenswrapper[4858]: E1122 09:38:35.049696 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131\": container with ID starting with a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131 not found: ID does not exist" containerID="a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.049729 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131"} err="failed to get container status \"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131\": rpc error: code = NotFound desc = could not find container \"a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131\": container with ID starting with a58bb50e7cc0d2cd8fba42417add3145ab6a7e3f9d6225a82ec88ef001674131 not found: ID does not exist" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.049752 4858 scope.go:117] "RemoveContainer" containerID="80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79" Nov 22 09:38:35 crc kubenswrapper[4858]: E1122 09:38:35.050291 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79\": container with ID starting with 80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79 not found: ID does not exist" containerID="80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.050319 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79"} err="failed to get container status \"80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79\": rpc error: code = NotFound desc = could not find container \"80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79\": container with ID starting with 80196d3d34847d9942dceb30a70cb41ef2ca80bd619494727a0ade034edd7d79 not found: ID does not exist" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.050353 4858 scope.go:117] "RemoveContainer" containerID="147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b" Nov 22 09:38:35 crc kubenswrapper[4858]: E1122 09:38:35.051013 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b\": container with ID starting with 147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b not found: ID does not exist" containerID="147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.051040 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b"} err="failed to get container status \"147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b\": rpc error: code = NotFound desc = could not find container \"147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b\": container with ID starting with 147ea58c3d3ad77023dd0e756875148f10da5836e11c5cece9e3237d8eb67d0b not found: ID does not exist" Nov 22 09:38:35 crc kubenswrapper[4858]: I1122 09:38:35.550108 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7516c723-95c1-4049-b731-094744f60fa8" path="/var/lib/kubelet/pods/7516c723-95c1-4049-b731-094744f60fa8/volumes" Nov 22 09:38:40 crc kubenswrapper[4858]: I1122 09:38:40.536196 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:38:40 crc kubenswrapper[4858]: E1122 09:38:40.537243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.167463 4858 scope.go:117] "RemoveContainer" containerID="fa261e5f5e7bf93982a936f9462835ed7ef7e4000e4767a93010bc445fed7d9d" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.216215 4858 scope.go:117] "RemoveContainer" containerID="87f92f33e6d1e62dc706dd981728716dbf374263266ff4176ffaec2b543c49e6" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.251562 4858 scope.go:117] "RemoveContainer" containerID="818543460bf30055b03bcfcd4e0fd8f5f8cf6c09019920f1e66c4835464c5074" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.293511 4858 scope.go:117] "RemoveContainer" containerID="b94c43098792e1d88c3c02839fab03f0eddb71650538bad8222d4d3fbbedd2a8" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.356875 4858 scope.go:117] "RemoveContainer" 
containerID="aaaeab0be14d1b4b62adad21c31ba1af3135e33defce9771a58a19398910bf8b" Nov 22 09:38:41 crc kubenswrapper[4858]: I1122 09:38:41.404021 4858 scope.go:117] "RemoveContainer" containerID="b6b7f8952a75ad432a0641e6296003887ef48941795dc9f8214df1749acef0d4" Nov 22 09:38:43 crc kubenswrapper[4858]: I1122 09:38:43.048066 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbl5d"] Nov 22 09:38:43 crc kubenswrapper[4858]: I1122 09:38:43.058306 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbl5d"] Nov 22 09:38:43 crc kubenswrapper[4858]: I1122 09:38:43.552262 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124066f7-9a16-4d81-a897-e7b47ef06710" path="/var/lib/kubelet/pods/124066f7-9a16-4d81-a897-e7b47ef06710/volumes" Nov 22 09:38:53 crc kubenswrapper[4858]: I1122 09:38:53.540058 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:38:53 crc kubenswrapper[4858]: E1122 09:38:53.540979 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:38:58 crc kubenswrapper[4858]: I1122 09:38:58.058112 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rrbtg"] Nov 22 09:38:58 crc kubenswrapper[4858]: I1122 09:38:58.076349 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rrbtg"] Nov 22 09:38:59 crc kubenswrapper[4858]: I1122 09:38:59.550980 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dae4660-c997-42c2-8b43-1184edd7388e" path="/var/lib/kubelet/pods/2dae4660-c997-42c2-8b43-1184edd7388e/volumes" Nov 22 09:39:00 crc kubenswrapper[4858]: I1122 09:39:00.044253 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dgz4c"] Nov 22 09:39:00 crc kubenswrapper[4858]: I1122 09:39:00.055813 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dgz4c"] Nov 22 09:39:01 crc kubenswrapper[4858]: I1122 09:39:01.558681 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adcfc0c9-3a34-4a25-bea5-4015b6c70880" path="/var/lib/kubelet/pods/adcfc0c9-3a34-4a25-bea5-4015b6c70880/volumes" Nov 22 09:39:04 crc kubenswrapper[4858]: I1122 09:39:04.535809 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:39:04 crc kubenswrapper[4858]: E1122 09:39:04.536640 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:39:19 crc kubenswrapper[4858]: I1122 09:39:19.541444 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 
09:39:19 crc kubenswrapper[4858]: E1122 09:39:19.542609 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.567930 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:39:31 crc kubenswrapper[4858]: E1122 09:39:31.570492 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="extract-utilities" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.570516 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="extract-utilities" Nov 22 09:39:31 crc kubenswrapper[4858]: E1122 09:39:31.570555 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="extract-content" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.570564 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="extract-content" Nov 22 09:39:31 crc kubenswrapper[4858]: E1122 09:39:31.570597 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="registry-server" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.570607 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="registry-server" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.570880 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7516c723-95c1-4049-b731-094744f60fa8" containerName="registry-server" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.573106 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.586253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.711586 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.711666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjjr\" (UniqueName: \"kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.711902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.813844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.814003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.814031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpjjr\" (UniqueName: \"kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.814331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.814392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.835370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kpjjr\" (UniqueName: \"kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr\") pod \"redhat-operators-gr6lp\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:31 crc kubenswrapper[4858]: I1122 09:39:31.904496 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:32 crc kubenswrapper[4858]: I1122 09:39:32.444054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:39:33 crc kubenswrapper[4858]: I1122 09:39:33.426005 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerID="85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1" exitCode=0 Nov 22 09:39:33 crc kubenswrapper[4858]: I1122 09:39:33.426126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerDied","Data":"85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1"} Nov 22 09:39:33 crc kubenswrapper[4858]: I1122 09:39:33.426345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerStarted","Data":"d6caa434ae59e647ef258c4c9b8c1986e49dd5b80ae4fcadc75d3bc2c05689fc"} Nov 22 09:39:34 crc kubenswrapper[4858]: I1122 09:39:34.439460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerStarted","Data":"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779"} Nov 22 09:39:34 crc kubenswrapper[4858]: I1122 09:39:34.535965 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:39:34 crc kubenswrapper[4858]: E1122 09:39:34.536463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qkh9t_openshift-machine-config-operator(4ac3f217-ad73-4e89-b703-b42a3c6c9ed4)\"" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.393872 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.394632 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" containerName="openstackclient" containerID="cri-o://89f5cb55d37c396ec6fc110b271605257fc3966e0a587600d0b34d9feee774c6" gracePeriod=2 Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.412769 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.458248 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.526044 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" 
containerID="e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779" exitCode=0 Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.526088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerDied","Data":"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779"} Nov 22 09:39:40 crc kubenswrapper[4858]: E1122 09:39:40.622435 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 09:39:40 crc kubenswrapper[4858]: E1122 09:39:40.622488 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data podName:59060e41-09d2-4441-8563-5302fd77a52d nodeName:}" failed. No retries permitted until 2025-11-22 09:39:41.122474733 +0000 UTC m=+8942.963897739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data") pod "rabbitmq-server-0" (UID: "59060e41-09d2-4441-8563-5302fd77a52d") : configmap "rabbitmq-config-data" not found Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.751970 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:39:40 crc kubenswrapper[4858]: E1122 09:39:40.836439 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:40 crc kubenswrapper[4858]: E1122 09:39:40.836493 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data podName:e8384e15-b249-44a6-8d35-8a2066b3da7b nodeName:}" failed. No retries permitted until 2025-11-22 09:39:41.336479111 +0000 UTC m=+8943.177902117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.905957 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cindera285-account-delete-9xdwn"] Nov 22 09:39:40 crc kubenswrapper[4858]: E1122 09:39:40.906413 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" containerName="openstackclient" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.906428 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" containerName="openstackclient" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.906679 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" containerName="openstackclient" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.907340 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.925673 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron6b93-account-delete-n54tn"] Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.927367 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.937388 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cindera285-account-delete-9xdwn"] Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.938779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.938921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49dz\" (UniqueName: \"kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:40 crc kubenswrapper[4858]: I1122 09:39:40.949806 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron6b93-account-delete-n54tn"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.043543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr565\" (UniqueName: \"kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.043628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.043671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.043780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l49dz\" (UniqueName: \"kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.044755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.102497 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican04c4-account-delete-8pfbj"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.138162 4858 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.147505 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican04c4-account-delete-8pfbj"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.172155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr565\" (UniqueName: \"kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: E1122 09:39:41.172391 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 09:39:41 crc kubenswrapper[4858]: E1122 09:39:41.172440 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data podName:59060e41-09d2-4441-8563-5302fd77a52d nodeName:}" failed. No retries permitted until 2025-11-22 09:39:42.172426393 +0000 UTC m=+8944.013849399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data") pod "rabbitmq-server-0" (UID: "59060e41-09d2-4441-8563-5302fd77a52d") : configmap "rabbitmq-config-data" not found Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.172611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.173979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.199647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49dz\" (UniqueName: \"kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz\") pod \"cindera285-account-delete-9xdwn\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.221124 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance034d-account-delete-lrkfs"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.225469 4858 util.go:30] "No sandbox for pod can be found. 
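The repeated configmap "rabbitmq-config-data" not found errors above mean the kubelet cannot materialize the config-data volume for rabbitmq-server-0, so the mount operation is parked and retried. A minimal client-go sketch, assuming an admin kubeconfig at /root/.kube/config, of the same kind of lookup, useful for confirming from outside the node whether the ConfigMap really is absent; namespace and object name are taken from the log lines above.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same object the kubelet's configmap volume plugin failed to fetch above.
	_, err = cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(),
		"rabbitmq-config-data", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("rabbitmq-config-data is absent; MountVolume.SetUp keeps failing until it reappears or the pod is deleted")
	case err != nil:
		panic(err)
	default:
		fmt.Println("ConfigMap exists; the mount should succeed on the next retry")
	}
}
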
Need to start a new one" pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.228068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr565\" (UniqueName: \"kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565\") pod \"neutron6b93-account-delete-n54tn\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.228532 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.252915 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.254914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance034d-account-delete-lrkfs"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.278726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.278767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvck2\" (UniqueName: \"kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.278798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbh9\" (UniqueName: \"kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.278838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.355568 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.356903 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.380675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbh9\" (UniqueName: \"kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.380744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.380945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.380969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvck2\" (UniqueName: \"kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: E1122 09:39:41.381581 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:41 crc kubenswrapper[4858]: E1122 09:39:41.381652 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data podName:e8384e15-b249-44a6-8d35-8a2066b3da7b nodeName:}" failed. No retries permitted until 2025-11-22 09:39:42.381634578 +0000 UTC m=+8944.223057584 (durationBeforeRetry 1s). 
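Note how durationBeforeRetry grows across these failures: 500ms in the first attempt above, 1s here, and 2s a little further down. That is the kubelet's per-volume exponential backoff in nestedpendingoperations. A small sketch of the doubling, with the cap chosen only for illustration and not claimed to be the kubelet's exact constant:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Initial delay and doubling mirror the log entries (500ms -> 1s -> 2s ...);
	// the cap here is an assumption made for the sketch.
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
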
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.383817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.385247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.401387 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.420847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvck2\" (UniqueName: \"kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2\") pod \"glance034d-account-delete-lrkfs\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.425969 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbh9\" (UniqueName: \"kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9\") pod \"barbican04c4-account-delete-8pfbj\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.464371 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.479532 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.496175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.496367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx48r\" (UniqueName: \"kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.507016 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement8911-account-delete-lsr9n"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.508482 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.519427 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement8911-account-delete-lsr9n"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.583760 4858 scope.go:117] "RemoveContainer" containerID="02301d2e275ba72fe89e7d7c44b31ea1dd7a3ed37c733ad98c9d4ad774b46f99" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.599342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx48r\" (UniqueName: \"kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.599491 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxd8z\" (UniqueName: \"kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.599581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.599614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.600309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.651305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx48r\" (UniqueName: \"kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r\") pod \"heataca4-account-delete-65j4m\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.684947 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.686075 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.686089 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.687120 4858 util.go:30] "No sandbox for pod can be found. 
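Each of the *-account-delete pods being attached and mounted above carries the same two volumes: an "operator-scripts" ConfigMap volume and a projected service-account token volume (the kube-api-access-* names). A hedged sketch of what those reconciler and operation_generator entries correspond to in a pod spec, expressed with the core/v1 Go types; the ConfigMap object name is hypothetical, since the log only shows the volume name.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			// Backs the "operator-scripts" mounts logged above.
			Name: "operator-scripts",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "placement8911-account-delete-scripts", // hypothetical name
					},
				},
			},
		},
		{
			// Backs the projected kube-api-access-* token mounts logged above.
			Name: "kube-api-access-nxd8z",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
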
Need to start a new one" pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.687469 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.690885 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.701472 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxd8z\" (UniqueName: \"kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.701547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.713043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.732160 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.732435 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" containerID="cri-o://07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" gracePeriod=30 Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.732559 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="openstack-network-exporter" containerID="cri-o://67e1251b14e7b433d9d3a2e216ea575663775c6d203919cad452b0e46788dce2" gracePeriod=30 Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.739894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxd8z\" (UniqueName: \"kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z\") pod \"placement8911-account-delete-lsr9n\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.803500 4858 scope.go:117] "RemoveContainer" containerID="9669e26c9c448a46b92ab9848eeaf897720ba792d47e71e1f0bd65eb66502bc0" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.805430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbj6\" (UniqueName: \"kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc 
kubenswrapper[4858]: I1122 09:39:41.805485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46nnd\" (UniqueName: \"kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.805505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.806062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.807846 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.885892 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.886778 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="openstack-network-exporter" containerID="cri-o://81501abf74e9ba60e651cc176ffd7cdf6b825e1cbbf8a19b79bc21f69b3efd8e" gracePeriod=300 Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.908070 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbj6\" (UniqueName: \"kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.908140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46nnd\" (UniqueName: \"kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.908162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.908228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " 
pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.909282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.909862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.912520 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.913074 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-1" podUID="ca469349-62e4-4ab0-bba0-66bc5d4c1956" containerName="openstack-network-exporter" containerID="cri-o://216b5090996fa3e09c1c2111cb8e5434b009b80a4f59aa7219be2938522a73c1" gracePeriod=300 Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.938857 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.939890 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-2" podUID="c7a007d2-4a0e-44bd-981f-8a56cbd45c50" containerName="openstack-network-exporter" containerID="cri-o://b4124e3465da381167bea724f4e7c9031ca9de71759512fab97869e75d371785" gracePeriod=300 Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.942992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbj6\" (UniqueName: \"kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6\") pod \"novacell0b969-account-delete-2lntr\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.964934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46nnd\" (UniqueName: \"kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd\") pod \"novaapi04c7-account-delete-q782b\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.984484 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:39:41 crc kubenswrapper[4858]: I1122 09:39:41.985869 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.005309 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.013595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmr4\" (UniqueName: \"kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.014016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.024107 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.056814 4858 scope.go:117] "RemoveContainer" containerID="646e24902e0a82f14865582bea7fe955b2f7d63642a56f1444d831742d8a43c6" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.107621 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.126487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmr4\" (UniqueName: \"kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.126663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.127446 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.136975 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.137013 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="ovsdbserver-sb" containerID="cri-o://ed0fb13c9d313c0057e131d50ff2e7899fad257cf3bed38b14bae9253765bc88" gracePeriod=300 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.150143 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmr4\" (UniqueName: \"kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4\") pod \"aodh2e22-account-delete-nqn9k\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.219723 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.220524 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-1" podUID="ac8a3c95-b813-4505-925f-8e750fd8f963" containerName="openstack-network-exporter" containerID="cri-o://32aede7cf2cd8fdaade9c4d13c3a6835fbd9e148895541f8bdc84ec2cf52db21" gracePeriod=300 Nov 22 09:39:42 crc kubenswrapper[4858]: E1122 09:39:42.235949 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 09:39:42 crc kubenswrapper[4858]: E1122 09:39:42.236207 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data podName:59060e41-09d2-4441-8563-5302fd77a52d nodeName:}" failed. No retries permitted until 2025-11-22 09:39:44.236195646 +0000 UTC m=+8946.077618652 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data") pod "rabbitmq-server-0" (UID: "59060e41-09d2-4441-8563-5302fd77a52d") : configmap "rabbitmq-config-data" not found Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.286392 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.287147 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-2" podUID="11d61978-7f41-441f-b6b7-18c00e684f58" containerName="openstack-network-exporter" containerID="cri-o://f6309275ad269cfccf8fdb8ad351b57fbe51319c375fa31979be8f85cecadbc3" gracePeriod=300 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.310343 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.318486 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="openstack-network-exporter" containerID="cri-o://b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217" gracePeriod=300 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.350230 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gsgmr"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.368618 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gsgmr"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.389685 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.389918 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="cinder-scheduler" containerID="cri-o://ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.392514 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="probe" containerID="cri-o://44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.419894 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.428416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7sct"] Nov 22 09:39:42 crc kubenswrapper[4858]: E1122 09:39:42.445879 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:42 crc kubenswrapper[4858]: E1122 09:39:42.445928 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data podName:e8384e15-b249-44a6-8d35-8a2066b3da7b nodeName:}" failed. No retries permitted until 2025-11-22 09:39:44.445915767 +0000 UTC m=+8946.287338773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.530969 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7sct"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.589780 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="ovsdbserver-nb" containerID="cri-o://dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d" gracePeriod=300 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.780848 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.781566 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon-log" containerID="cri-o://6a0b3388f3b07e344f0aa419e784922d216578af23d8b90ea1471324a5e1ccfa" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.782143 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" containerID="cri-o://8ed3ae5cedd53bd3a69e7e010ea65e7a6fc66b139c069cae1957b6aaf00b873d" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.788764 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7a007d2-4a0e-44bd-981f-8a56cbd45c50" containerID="b4124e3465da381167bea724f4e7c9031ca9de71759512fab97869e75d371785" exitCode=2 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.788947 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"c7a007d2-4a0e-44bd-981f-8a56cbd45c50","Type":"ContainerDied","Data":"b4124e3465da381167bea724f4e7c9031ca9de71759512fab97869e75d371785"} Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.807155 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac8a3c95-b813-4505-925f-8e750fd8f963" containerID="32aede7cf2cd8fdaade9c4d13c3a6835fbd9e148895541f8bdc84ec2cf52db21" exitCode=2 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.807676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ac8a3c95-b813-4505-925f-8e750fd8f963","Type":"ContainerDied","Data":"32aede7cf2cd8fdaade9c4d13c3a6835fbd9e148895541f8bdc84ec2cf52db21"} Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.903768 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_81d944bd-93c5-4863-96df-f83a4ff1db9b/ovsdbserver-nb/0.log" Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.903838 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerID="b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217" exitCode=2 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.903914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerDied","Data":"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217"} Nov 22 09:39:42 
crc kubenswrapper[4858]: I1122 09:39:42.916645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerStarted","Data":"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868"} Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.917672 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.917915 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="dnsmasq-dns" containerID="cri-o://c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7" gracePeriod=10 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.943928 4858 generic.go:334] "Generic (PLEG): container finished" podID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerID="67e1251b14e7b433d9d3a2e216ea575663775c6d203919cad452b0e46788dce2" exitCode=2 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.943999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerDied","Data":"67e1251b14e7b433d9d3a2e216ea575663775c6d203919cad452b0e46788dce2"} Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.957963 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.958181 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api-log" containerID="cri-o://c02bcf924af8e4c2d6ca90bd8a608ea834531a49916fa28f1e8aadbb6103b5f6" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.958261 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api" containerID="cri-o://0d9c14545905e3cda2f017bb37cf1a67c2243ee303a9eec348eaebba94004931" gracePeriod=30 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.976045 4858 generic.go:334] "Generic (PLEG): container finished" podID="11d61978-7f41-441f-b6b7-18c00e684f58" containerID="f6309275ad269cfccf8fdb8ad351b57fbe51319c375fa31979be8f85cecadbc3" exitCode=2 Nov 22 09:39:42 crc kubenswrapper[4858]: I1122 09:39:42.976136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"11d61978-7f41-441f-b6b7-18c00e684f58","Type":"ContainerDied","Data":"f6309275ad269cfccf8fdb8ad351b57fbe51319c375fa31979be8f85cecadbc3"} Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.003113 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zgsvc"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.008598 4858 generic.go:334] "Generic (PLEG): container finished" podID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" containerID="89f5cb55d37c396ec6fc110b271605257fc3966e0a587600d0b34d9feee774c6" exitCode=137 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.017882 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zgsvc"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.043388 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.043699 4858 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-api" containerID="cri-o://0e440c8b36113cc42b6ddd774ee75f89f04beccb033e5b4e3d7827901f46cf17" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.044144 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-listener" containerID="cri-o://3967676f8e95adb5ee5014b410d8fa6ed22970b37607a556cdda336ed986c928" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.044209 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-notifier" containerID="cri-o://87056ef5db220c131bb3ec20fed1d41cb562684d629666af50d9c09b8a77410d" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.044248 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-evaluator" containerID="cri-o://5e0c7f07b403939e0b30d379cfb2f6f7c0e0f0331d4da8acd1e935938d2cf0d3" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.058833 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.070445 4858 generic.go:334] "Generic (PLEG): container finished" podID="ca469349-62e4-4ab0-bba0-66bc5d4c1956" containerID="216b5090996fa3e09c1c2111cb8e5434b009b80a4f59aa7219be2938522a73c1" exitCode=2 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.070544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ca469349-62e4-4ab0-bba0-66bc5d4c1956","Type":"ContainerDied","Data":"216b5090996fa3e09c1c2111cb8e5434b009b80a4f59aa7219be2938522a73c1"} Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.086110 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.086474 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc86c6f7-88xlp" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-api" containerID="cri-o://412e82414159e7ac3a4aa5c2cccb641255d6bef151b2b51f1cf479bfc2da047b" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.086955 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc86c6f7-88xlp" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-httpd" containerID="cri-o://96d625e1d523edde845f7074cc2ca87e3c4b5c2c1898cd03e2d07a4a1aab3b91" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.116603 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron6b93-account-delete-n54tn"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.125201 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_178ee462-fc5c-4fc1-bdbc-22251a60c6a1/ovsdbserver-sb/0.log" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.125252 4858 generic.go:334] "Generic (PLEG): container finished" podID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerID="81501abf74e9ba60e651cc176ffd7cdf6b825e1cbbf8a19b79bc21f69b3efd8e" exitCode=2 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 
09:39:43.125273 4858 generic.go:334] "Generic (PLEG): container finished" podID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerID="ed0fb13c9d313c0057e131d50ff2e7899fad257cf3bed38b14bae9253765bc88" exitCode=143 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.125755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerDied","Data":"81501abf74e9ba60e651cc176ffd7cdf6b825e1cbbf8a19b79bc21f69b3efd8e"} Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.125789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerDied","Data":"ed0fb13c9d313c0057e131d50ff2e7899fad257cf3bed38b14bae9253765bc88"} Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.140232 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.140566 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-log" containerID="cri-o://0aa85a37e97e72c8efa2b73911f4eef75c838f7fb6915cad5a5299b8caecf2b7" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.140727 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-httpd" containerID="cri-o://428bc38b18119c4305d118eb828b9d35bf76f7f0732bf893cb1b34f626cfecdb" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.141468 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="rabbitmq" containerID="cri-o://48243763ff91a842163928192fc2ea246f302325792033ccd2427519d16f31b0" gracePeriod=604800 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.157237 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.174307 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gr6lp" podStartSLOduration=4.371972832 podStartE2EDuration="12.174278997s" podCreationTimestamp="2025-11-22 09:39:31 +0000 UTC" firstStartedPulling="2025-11-22 09:39:33.430417888 +0000 UTC m=+8935.271840894" lastFinishedPulling="2025-11-22 09:39:41.232724053 +0000 UTC m=+8943.074147059" observedRunningTime="2025-11-22 09:39:42.975752754 +0000 UTC m=+8944.817175760" watchObservedRunningTime="2025-11-22 09:39:43.174278997 +0000 UTC m=+8945.015702003" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.217588 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.218058 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74977f9d76-k6dlw" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-log" containerID="cri-o://9329c10d2543dce5392c0af5a7d61ebfe67fba02c6cbc2e7b19da53775192377" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.218455 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74977f9d76-k6dlw" 
podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-api" containerID="cri-o://065e51e4b82bfd09ef58eeccb1d741e51d5167ffe8e2bd644d87495b643cbfb5" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.249145 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.249475 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-log" containerID="cri-o://ff441736a1f0bdc42df1f5f8ac8566ce878fc447391e44d3d92513cd53973a0c" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.249923 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-httpd" containerID="cri-o://2616f6e010c2f47567c82c59233a83474d6307221bc0e3019310b01ca819c5e0" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.265658 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-hpwgh"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.286476 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-hpwgh"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.315243 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.315507 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-log" containerID="cri-o://e8e0687f6df23a2cb8e5fca6694574c9fc79ab632a7cbd059eef1fbf16f9f711" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.315912 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-api" containerID="cri-o://57907d16e0311bc717a33ae1f359ab9a46d08e1abe4ca40d8893d8086ef774ac" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.352392 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="rabbitmq" containerID="cri-o://7a1b9aa9bf7fdcfe3b6dd842717d88716652a749a754b92b43ad5226f5e6ec33" gracePeriod=604800 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.399550 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: E1122 09:39:43.405101 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:43 crc kubenswrapper[4858]: E1122 09:39:43.423468 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:43 crc 
kubenswrapper[4858]: E1122 09:39:43.429597 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:43 crc kubenswrapper[4858]: E1122 09:39:43.429649 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.442288 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.8:5671: connect: connection refused" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.451773 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.452050 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" containerID="cri-o://ffde0c5535e5575efcd312c44becdc816a46ec2830edcdf8c7cac194047d0a3d" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.452200 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" containerID="cri-o://2a569e7aef5c1478654a43f23ff834b089ab7b81d90062f8bc434d0602c00539" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.480396 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.480662 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" containerID="cri-o://761bd583458b9228a46e2048c9579370d0d1ec7104acbbde74a8d9d0c1f15d55" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.481081 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" containerID="cri-o://9f5e64397fcfbf30b8e57de5cd79bbaa5aa1cfb6dc41d738673c9552face9f4f" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.502422 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.502705 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener-log" containerID="cri-o://04b8eabacd40872b6a27353dabf534bacf39a98dba7ea7e75a7efb827a971e4a" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.503096 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener" containerID="cri-o://55ca57bd132c43b406a7e2f78d44ccc4ccfef51b3c54f7deb21f3fcdf315f42d" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.516856 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.517093 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-575cc76dd7-swvhx" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker-log" containerID="cri-o://452a9ab7b1b4a1974cdad0d365d5a8a6fa77348bb175f5268abb56ed7e86bf62" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.517434 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-575cc76dd7-swvhx" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker" containerID="cri-o://cddb36142f710de01a2a2604912a1de51c98b16778d69d3541cb2e91fd0be10f" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.529444 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.529691 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-554bc84945-x99pt" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" containerName="heat-api" containerID="cri-o://bfc6172709f280143555d90466293bfa2c52e1d1c69bc716075bf79ffcfb671e" gracePeriod=60 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.567515 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c75b585-b5a6-4cc7-a73d-8a56862d2aef" path="/var/lib/kubelet/pods/2c75b585-b5a6-4cc7-a73d-8a56862d2aef/volumes" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.572414 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a975fe9f-1fb8-4c7a-b88b-fb806065a5f3" path="/var/lib/kubelet/pods/a975fe9f-1fb8-4c7a-b88b-fb806065a5f3/volumes" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.573023 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0" path="/var/lib/kubelet/pods/dd03c4f4-cb47-4f93-a0f9-01ba93c3ecb0/volumes" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.573561 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2dc0863-1fed-426e-91b9-4112507cd4a2" path="/var/lib/kubelet/pods/e2dc0863-1fed-426e-91b9-4112507cd4a2/volumes" Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.574143 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.574182 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.574370 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerName="heat-cfnapi" containerID="cri-o://3e9f60a9242f5ea9166f64aec3d772c195c831a12e40616f61d15e94761b65aa" gracePeriod=60 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.574735 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7f4fc69954-bcngv" 
podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" containerID="cri-o://907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" gracePeriod=60 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.629766 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.629973 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.691378 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.691608 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" containerID="cri-o://94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.721384 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cindera285-account-delete-9xdwn"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.741964 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.742304 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="alertmanager" containerID="cri-o://918209d7d13a78e17d2265b8b6e9586b5d6360719a05e32d9d26a420c7ab48d1" gracePeriod=120 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.742739 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="config-reloader" containerID="cri-o://4f38792396e3d0b3fe3482c717f089c4843b54559f52cb8be1e2ed5bed2a403e" gracePeriod=120 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.763046 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.763518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cd68b47b-06e7-4e59-aad6-cae8c376573d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8070c89d3808b68f0b98fb9cbd32312e22d937be61d9757f60eb633a06522feb" gracePeriod=30 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.783995 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.784348 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="prometheus" containerID="cri-o://6ad6597a2759cc61aa76fd00e3a64b4ee32679b91be7e663c37976b726f4357e" gracePeriod=600 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.784891 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="thanos-sidecar" containerID="cri-o://d2f8ef66b6a8e77f76210f4a45fe3aca5169cb0000916d8304fd25265cec38d1" gracePeriod=600 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.784954 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="config-reloader" containerID="cri-o://dd0d4a38c6628e6cd6833ecc0f37a9b78f79b92faf3d87c5ffac41a4d3c25c15" gracePeriod=600 Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.813089 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:39:43 crc kubenswrapper[4858]: I1122 09:39:43.813297 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="419367a7-1838-4692-b6fc-f266985765d7" containerName="nova-scheduler-scheduler" containerID="cri-o://9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" gracePeriod=30 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.045122 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican04c4-account-delete-8pfbj"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.102706 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_178ee462-fc5c-4fc1-bdbc-22251a60c6a1/ovsdbserver-sb/0.log" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.105782 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.124557 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="galera" containerID="cri-o://d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" gracePeriod=30 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.129001 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.138128 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.138194 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.170879 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.181449 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_81d944bd-93c5-4863-96df-f83a4ff1db9b/ovsdbserver-nb/0.log" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.181519 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.210444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.210478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddht\" (UniqueName: \"kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.210520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.210586 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.214455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.214526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.214593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.214639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir\") pod \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\" (UID: \"178ee462-fc5c-4fc1-bdbc-22251a60c6a1\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.219538 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config" (OuterVolumeSpecName: "config") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.221972 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.222202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.224945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts" (OuterVolumeSpecName: "scripts") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.251183 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.251240 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.257144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht" (OuterVolumeSpecName: "kube-api-access-pddht") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "kube-api-access-pddht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.267182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "pvc-f75666bc-124a-43de-b87e-692947cbd508". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.269932 4858 generic.go:334] "Generic (PLEG): container finished" podID="0691c992-818e-46a2-9057-2f9548253076" containerID="918209d7d13a78e17d2265b8b6e9586b5d6360719a05e32d9d26a420c7ab48d1" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.270013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerDied","Data":"918209d7d13a78e17d2265b8b6e9586b5d6360719a05e32d9d26a420c7ab48d1"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.292036 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerID="ffde0c5535e5575efcd312c44becdc816a46ec2830edcdf8c7cac194047d0a3d" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.292122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerDied","Data":"ffde0c5535e5575efcd312c44becdc816a46ec2830edcdf8c7cac194047d0a3d"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.326072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb\") pod \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.326523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config\") pod \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.326926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc\") pod \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327067 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327142 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4t8s\" (UniqueName: \"kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s\") pod 
\"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327211 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvdj\" (UniqueName: \"kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327272 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb\") pod \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.327365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.328256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ztvx\" (UniqueName: \"kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx\") pod \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\" (UID: \"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.328386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.328901 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config" (OuterVolumeSpecName: "config") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.329866 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts" (OuterVolumeSpecName: "scripts") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.331067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.335361 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") pod \"81d944bd-93c5-4863-96df-f83a4ff1db9b\" (UID: \"81d944bd-93c5-4863-96df-f83a4ff1db9b\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.335453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret\") pod \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.335486 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config\") pod \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.335528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle\") pod \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\" (UID: \"ced90ddf-eae9-45e2-ae0a-9306ed9873d7\") " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336433 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336458 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336470 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336482 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336494 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pddht\" (UniqueName: \"kubernetes.io/projected/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-kube-api-access-pddht\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336505 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81d944bd-93c5-4863-96df-f83a4ff1db9b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.336531 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") on node \"crc\" " Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.342170 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: 
configmap "rabbitmq-config-data" not found Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.342259 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data podName:59060e41-09d2-4441-8563-5302fd77a52d nodeName:}" failed. No retries permitted until 2025-11-22 09:39:48.342240185 +0000 UTC m=+8950.183663191 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data") pod "rabbitmq-server-0" (UID: "59060e41-09d2-4441-8563-5302fd77a52d") : configmap "rabbitmq-config-data" not found Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.389568 4858 generic.go:334] "Generic (PLEG): container finished" podID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerID="c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.389657 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" event={"ID":"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098","Type":"ContainerDied","Data":"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.389685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" event={"ID":"81f0d7b5-53a2-4d57-8d3e-fce52b6fd098","Type":"ContainerDied","Data":"92f4e7078287f330b2f9db7c62cecba4b0ea383cb20036c7200124c252b4c6d0"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.389713 4858 scope.go:117] "RemoveContainer" containerID="c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.389905 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.392895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx" (OuterVolumeSpecName: "kube-api-access-2ztvx") pod "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" (UID: "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"). InnerVolumeSpecName "kube-api-access-2ztvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.393808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s" (OuterVolumeSpecName: "kube-api-access-w4t8s") pod "ced90ddf-eae9-45e2-ae0a-9306ed9873d7" (UID: "ced90ddf-eae9-45e2-ae0a-9306ed9873d7"). InnerVolumeSpecName "kube-api-access-w4t8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.397832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.398350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj" (OuterVolumeSpecName: "kube-api-access-tpvdj") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "kube-api-access-tpvdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.405008 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.422197 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.422547 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement8911-account-delete-lsr9n"] Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.441108 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.441548 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441126 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4t8s\" (UniqueName: \"kubernetes.io/projected/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-kube-api-access-w4t8s\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441602 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpvdj\" (UniqueName: \"kubernetes.io/projected/81d944bd-93c5-4863-96df-f83a4ff1db9b-kube-api-access-tpvdj\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441618 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ztvx\" (UniqueName: \"kubernetes.io/projected/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-kube-api-access-2ztvx\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441642 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") on node \"crc\" " Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441641 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerID="c02bcf924af8e4c2d6ca90bd8a608ea834531a49916fa28f1e8aadbb6103b5f6" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.441755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerDied","Data":"c02bcf924af8e4c2d6ca90bd8a608ea834531a49916fa28f1e8aadbb6103b5f6"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.493544 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.495219 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_178ee462-fc5c-4fc1-bdbc-22251a60c6a1/ovsdbserver-sb/0.log" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.495369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"178ee462-fc5c-4fc1-bdbc-22251a60c6a1","Type":"ContainerDied","Data":"66b2030990f3445a98e8b16092db748e3ec88e952625b09bd8cf6dca7cb4085a"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.495652 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.544719 4858 generic.go:334] "Generic (PLEG): container finished" podID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerID="761bd583458b9228a46e2048c9579370d0d1ec7104acbbde74a8d9d0c1f15d55" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.544770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerDied","Data":"761bd583458b9228a46e2048c9579370d0d1ec7104acbbde74a8d9d0c1f15d55"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.559356 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.569089 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:44 crc kubenswrapper[4858]: E1122 09:39:44.569179 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data podName:e8384e15-b249-44a6-8d35-8a2066b3da7b nodeName:}" failed. No retries permitted until 2025-11-22 09:39:48.569157117 +0000 UTC m=+8950.410580123 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.574390 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.631909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.638893 4858 generic.go:334] "Generic (PLEG): container finished" podID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerID="d2f8ef66b6a8e77f76210f4a45fe3aca5169cb0000916d8304fd25265cec38d1" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.638923 4858 generic.go:334] "Generic (PLEG): container finished" podID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerID="6ad6597a2759cc61aa76fd00e3a64b4ee32679b91be7e663c37976b726f4357e" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.638974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerDied","Data":"d2f8ef66b6a8e77f76210f4a45fe3aca5169cb0000916d8304fd25265cec38d1"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.639001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerDied","Data":"6ad6597a2759cc61aa76fd00e3a64b4ee32679b91be7e663c37976b726f4357e"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.650592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance034d-account-delete-lrkfs"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.651815 4858 generic.go:334] "Generic (PLEG): container finished" podID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerID="44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.651903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerDied","Data":"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.664479 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.677199 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.679264 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.679656 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f75666bc-124a-43de-b87e-692947cbd508" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508") on node "crc" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.711549 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.716758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ced90ddf-eae9-45e2-ae0a-9306ed9873d7" (UID: "ced90ddf-eae9-45e2-ae0a-9306ed9873d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.732151 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_81d944bd-93c5-4863-96df-f83a4ff1db9b/ovsdbserver-nb/0.log" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.732229 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerID="dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.732380 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.732693 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerDied","Data":"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.732761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"81d944bd-93c5-4863-96df-f83a4ff1db9b","Type":"ContainerDied","Data":"811a3ce45e287556d16b4d6593dbcfea5e69d932639001cc8c108a7643049658"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.737880 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.747577 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.756012 4858 generic.go:334] "Generic (PLEG): container finished" podID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerID="0aa85a37e97e72c8efa2b73911f4eef75c838f7fb6915cad5a5299b8caecf2b7" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.756236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerDied","Data":"0aa85a37e97e72c8efa2b73911f4eef75c838f7fb6915cad5a5299b8caecf2b7"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.760620 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"] Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.761442 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-579546c64d-fkr76" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-httpd" 
containerID="cri-o://47732564dcbee4396779be68097333bf6ceab57ebc135dff5097a79c851b70b2" gracePeriod=30 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.765420 4858 generic.go:334] "Generic (PLEG): container finished" podID="879cb25d-5d39-48df-ac21-505127e58fd1" containerID="04b8eabacd40872b6a27353dabf534bacf39a98dba7ea7e75a7efb827a971e4a" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.765512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerDied","Data":"04b8eabacd40872b6a27353dabf534bacf39a98dba7ea7e75a7efb827a971e4a"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.761512 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-579546c64d-fkr76" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-server" containerID="cri-o://f4d10c4811595086f1768850e4ba22ed889daba484227d3c6748462dbd9d902b" gracePeriod=30 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.773170 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerID="ff441736a1f0bdc42df1f5f8ac8566ce878fc447391e44d3d92513cd53973a0c" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.773512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerDied","Data":"ff441736a1f0bdc42df1f5f8ac8566ce878fc447391e44d3d92513cd53973a0c"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.784327 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerID="9329c10d2543dce5392c0af5a7d61ebfe67fba02c6cbc2e7b19da53775192377" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.784410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerDied","Data":"9329c10d2543dce5392c0af5a7d61ebfe67fba02c6cbc2e7b19da53775192377"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.787231 4858 generic.go:334] "Generic (PLEG): container finished" podID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerID="452a9ab7b1b4a1974cdad0d365d5a8a6fa77348bb175f5268abb56ed7e86bf62" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.787271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerDied","Data":"452a9ab7b1b4a1974cdad0d365d5a8a6fa77348bb175f5268abb56ed7e86bf62"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.789915 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.790046 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-f75666bc-124a-43de-b87e-692947cbd508\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f75666bc-124a-43de-b87e-692947cbd508\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.798334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ced90ddf-eae9-45e2-ae0a-9306ed9873d7" (UID: "ced90ddf-eae9-45e2-ae0a-9306ed9873d7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.800823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.801368 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerID="e8e0687f6df23a2cb8e5fca6694574c9fc79ab632a7cbd059eef1fbf16f9f711" exitCode=143 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.801439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerDied","Data":"e8e0687f6df23a2cb8e5fca6694574c9fc79ab632a7cbd059eef1fbf16f9f711"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.845516 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron6b93-account-delete-n54tn" event={"ID":"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60","Type":"ContainerStarted","Data":"6e9c0952e8a842b24e6eb1030b650c357a92fcca0232e915e3410d7cf5e6f44b"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.845748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron6b93-account-delete-n54tn" event={"ID":"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60","Type":"ContainerStarted","Data":"a0f025946911279abbd933d6c00ad76b80427f3bf95eeff98a000f0f2302c82b"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.861731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican04c4-account-delete-8pfbj" event={"ID":"7c014721-aa5e-4b1e-93b7-36b6832df6c6","Type":"ContainerStarted","Data":"8c6e5012f5aff653db545323992a0dd2c4aa2259f6aef7f23b60f080e644526f"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.892123 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.892443 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.904856 4858 generic.go:334] "Generic (PLEG): container finished" podID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerID="0e440c8b36113cc42b6ddd774ee75f89f04beccb033e5b4e3d7827901f46cf17" exitCode=0 Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.905142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerDied","Data":"0e440c8b36113cc42b6ddd774ee75f89f04beccb033e5b4e3d7827901f46cf17"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.917517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera285-account-delete-9xdwn" 
event={"ID":"938520b5-d4e9-489e-8f92-642c144d69bc","Type":"ContainerStarted","Data":"96334a1f6104ea69bbcb68ad506c5f6bd295595b07a50d8b348437f0eb68576a"} Nov 22 09:39:44 crc kubenswrapper[4858]: I1122 09:39:44.917613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera285-account-delete-9xdwn" event={"ID":"938520b5-d4e9-489e-8f92-642c144d69bc","Type":"ContainerStarted","Data":"cd501d09e7356d65cb9ec1d14255af18626489d931bc6c57bdec78a6513b84bb"} Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.028618 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.028942 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06") on node "crc" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.118473 4858 scope.go:117] "RemoveContainer" containerID="d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.122971 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17e60d85-56f7-4cd0-a0d8-29d2f116bf06\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.309718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config" (OuterVolumeSpecName: "config") pod "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" (UID: "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.325545 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "178ee462-fc5c-4fc1-bdbc-22251a60c6a1" (UID: "178ee462-fc5c-4fc1-bdbc-22251a60c6a1"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.326989 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.327022 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/178ee462-fc5c-4fc1-bdbc-22251a60c6a1-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.341389 4858 scope.go:117] "RemoveContainer" containerID="c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7" Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.349736 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7\": container with ID starting with c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7 not found: ID does not exist" containerID="c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.349771 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7"} err="failed to get container status \"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7\": rpc error: code = NotFound desc = could not find container \"c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7\": container with ID starting with c671469c597ccbb12076e9eb206a33ca3af0a8e978536a1446fa77b2cd4d5ce7 not found: ID does not exist" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.349793 4858 scope.go:117] "RemoveContainer" containerID="d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803" Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.354140 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803\": container with ID starting with d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803 not found: ID does not exist" containerID="d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.354168 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803"} err="failed to get container status \"d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803\": rpc error: code = NotFound desc = could not find container \"d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803\": container with ID starting with d8803797b1595c5b0e83c8ba2abb2aa421c56a5d146e36aafe7008df1cffa803 not found: ID does not exist" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.354184 4858 scope.go:117] "RemoveContainer" containerID="81501abf74e9ba60e651cc176ffd7cdf6b825e1cbbf8a19b79bc21f69b3efd8e" Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.563943 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246 is running failed: container 
process not found" containerID="d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.587728 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246 is running failed: container process not found" containerID="d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.592527 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246 is running failed: container process not found" containerID="d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 09:39:45 crc kubenswrapper[4858]: E1122 09:39:45.592596 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="galera" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.612474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" (UID: "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.637942 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.720699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" (UID: "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.730516 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" (UID: "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.741811 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.741845 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.760356 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-579546c64d-fkr76" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.62:8080/healthcheck\": dial tcp 10.217.1.62:8080: connect: connection refused" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.770874 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-579546c64d-fkr76" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.1.62:8080/healthcheck\": dial tcp 10.217.1.62:8080: connect: connection refused" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.797182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ced90ddf-eae9-45e2-ae0a-9306ed9873d7" (UID: "ced90ddf-eae9-45e2-ae0a-9306ed9873d7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.832172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.844617 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.844872 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ced90ddf-eae9-45e2-ae0a-9306ed9873d7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.865534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "81d944bd-93c5-4863-96df-f83a4ff1db9b" (UID: "81d944bd-93c5-4863-96df-f83a4ff1db9b"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.927466 4858 generic.go:334] "Generic (PLEG): container finished" podID="0691c992-818e-46a2-9057-2f9548253076" containerID="4f38792396e3d0b3fe3482c717f089c4843b54559f52cb8be1e2ed5bed2a403e" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.928525 4858 generic.go:334] "Generic (PLEG): container finished" podID="cd68b47b-06e7-4e59-aad6-cae8c376573d" containerID="8070c89d3808b68f0b98fb9cbd32312e22d937be61d9757f60eb633a06522feb" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.929495 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c014721-aa5e-4b1e-93b7-36b6832df6c6" containerID="09318214b3cc104ed30a7c990ba09e1b574509c4a8da3beeccf696a36b4ce24d" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.931784 4858 generic.go:334] "Generic (PLEG): container finished" podID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerID="dd0d4a38c6628e6cd6833ecc0f37a9b78f79b92faf3d87c5ffac41a4d3c25c15" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.937553 4858 generic.go:334] "Generic (PLEG): container finished" podID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerID="5e0c7f07b403939e0b30d379cfb2f6f7c0e0f0331d4da8acd1e935938d2cf0d3" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.950072 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/81d944bd-93c5-4863-96df-f83a4ff1db9b-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.967292 4858 generic.go:334] "Generic (PLEG): container finished" podID="9926527b-80a8-4a26-bc82-053200dbb73f" containerID="f4d10c4811595086f1768850e4ba22ed889daba484227d3c6748462dbd9d902b" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.967359 4858 generic.go:334] "Generic (PLEG): container finished" podID="9926527b-80a8-4a26-bc82-053200dbb73f" containerID="47732564dcbee4396779be68097333bf6ceab57ebc135dff5097a79c851b70b2" exitCode=0 Nov 22 09:39:45 crc kubenswrapper[4858]: I1122 09:39:45.985808 4858 generic.go:334] "Generic (PLEG): container finished" podID="938520b5-d4e9-489e-8f92-642c144d69bc" containerID="96334a1f6104ea69bbcb68ad506c5f6bd295595b07a50d8b348437f0eb68576a" exitCode=0 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.000389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance034d-account-delete-lrkfs" podStartSLOduration=5.0003659 podStartE2EDuration="5.0003659s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:45.969244834 +0000 UTC m=+8947.810667840" watchObservedRunningTime="2025-11-22 09:39:46.0003659 +0000 UTC m=+8947.841788916" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerDied","Data":"4f38792396e3d0b3fe3482c717f089c4843b54559f52cb8be1e2ed5bed2a403e"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008221 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"0691c992-818e-46a2-9057-2f9548253076","Type":"ContainerDied","Data":"f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008235 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86bd66e9d534895a13f2670155400e7635f86c8f93cd54e90ce2396d573ab6a" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd68b47b-06e7-4e59-aad6-cae8c376573d","Type":"ContainerDied","Data":"8070c89d3808b68f0b98fb9cbd32312e22d937be61d9757f60eb633a06522feb"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd68b47b-06e7-4e59-aad6-cae8c376573d","Type":"ContainerDied","Data":"7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008265 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8095ec08978ae0c1e4a93cc507fa71880078bd94fbd6699a7a7280ba982da7" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican04c4-account-delete-8pfbj" event={"ID":"7c014721-aa5e-4b1e-93b7-36b6832df6c6","Type":"ContainerDied","Data":"09318214b3cc104ed30a7c990ba09e1b574509c4a8da3beeccf696a36b4ce24d"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerDied","Data":"dd0d4a38c6628e6cd6833ecc0f37a9b78f79b92faf3d87c5ffac41a4d3c25c15"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c20945ad-d582-4bb8-a485-c6dbb78207fe","Type":"ContainerDied","Data":"66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008310 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f8cbe4ac11aa70bbf43ca1dcd36213291997b518de494ef5d8337c78c76cd7" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi04c7-account-delete-q782b" event={"ID":"15c7de97-b620-4e9b-8e17-27da546d6fb8","Type":"ContainerStarted","Data":"3e0354f6596301fcac89f59c44171064d318ff8985d02887d289086b64642e98"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerDied","Data":"5e0c7f07b403939e0b30d379cfb2f6f7c0e0f0331d4da8acd1e935938d2cf0d3"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance034d-account-delete-lrkfs" event={"ID":"084a15f9-e534-46ad-b38a-17eeb1b6589e","Type":"ContainerStarted","Data":"b4ec36e83fb97861924a17f10f38076cf0809be44ef3d52da5e469db14de059b"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance034d-account-delete-lrkfs" 
event={"ID":"084a15f9-e534-46ad-b38a-17eeb1b6589e","Type":"ContainerStarted","Data":"9118a6d66e855c4ecab5223b114637d34770ddab34112f5da29558ac484fee65"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0b969-account-delete-2lntr" event={"ID":"eb9be543-7566-4423-b4ed-5d9596cf21a4","Type":"ContainerStarted","Data":"f5302204afff924b438835c055a9015d89098bd53d75ecd91dc452276add1d9c"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008915 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8911-account-delete-lsr9n" event={"ID":"1c0f4278-ebde-458a-85b5-9f95824cee1a","Type":"ContainerStarted","Data":"b6215278e1bc20990b626e0be03145dc61b4dd24770816117ce4a8a7224c3d7a"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8911-account-delete-lsr9n" event={"ID":"1c0f4278-ebde-458a-85b5-9f95824cee1a","Type":"ContainerStarted","Data":"0786ac43f63fc86c1b0ae25e982450b6bafd98213c721a56bab5e454872c99b8"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerDied","Data":"f4d10c4811595086f1768850e4ba22ed889daba484227d3c6748462dbd9d902b"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerDied","Data":"47732564dcbee4396779be68097333bf6ceab57ebc135dff5097a79c851b70b2"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heataca4-account-delete-65j4m" event={"ID":"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6","Type":"ContainerStarted","Data":"660c340e4e0f621b51dea65cd5896ccccfe58542175983992935a581d5a73832"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.008988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera285-account-delete-9xdwn" event={"ID":"938520b5-d4e9-489e-8f92-642c144d69bc","Type":"ContainerDied","Data":"96334a1f6104ea69bbcb68ad506c5f6bd295595b07a50d8b348437f0eb68576a"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.011486 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement8911-account-delete-lsr9n" podStartSLOduration=5.011472535 podStartE2EDuration="5.011472535s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:45.99101245 +0000 UTC m=+8947.832435456" watchObservedRunningTime="2025-11-22 09:39:46.011472535 +0000 UTC m=+8947.852895541" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.041751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh2e22-account-delete-nqn9k" event={"ID":"64b43663-db69-4e42-a14e-85cc35b48dc3","Type":"ContainerStarted","Data":"1ae86203c3f1fb6500a30e8580f83b2be9b6bbad04cc76c0f981952f2add976e"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.093740 4858 generic.go:334] "Generic (PLEG): container finished" podID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerID="d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" exitCode=0 Nov 22 09:39:46 crc 
kubenswrapper[4858]: I1122 09:39:46.093849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerDied","Data":"d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.093881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11074703-ddac-49f9-b53d-5ec6c721af7d","Type":"ContainerDied","Data":"e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.093893 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7fed850d1b081ef4e940dc9b604aca39156f85ad5537de59f52bb8bf89da8c6" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.101312 4858 generic.go:334] "Generic (PLEG): container finished" podID="d38ef80a-bbad-4072-a37b-1e355a943447" containerID="96d625e1d523edde845f7074cc2ca87e3c4b5c2c1898cd03e2d07a4a1aab3b91" exitCode=0 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.101386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerDied","Data":"96d625e1d523edde845f7074cc2ca87e3c4b5c2c1898cd03e2d07a4a1aab3b91"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.107934 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" containerID="6e9c0952e8a842b24e6eb1030b650c357a92fcca0232e915e3410d7cf5e6f44b" exitCode=0 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.107978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron6b93-account-delete-n54tn" event={"ID":"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60","Type":"ContainerDied","Data":"6e9c0952e8a842b24e6eb1030b650c357a92fcca0232e915e3410d7cf5e6f44b"} Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.113164 4858 scope.go:117] "RemoveContainer" containerID="ed0fb13c9d313c0057e131d50ff2e7899fad257cf3bed38b14bae9253765bc88" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.174233 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.267689 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.268791 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxb6p\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.269004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.269199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.269375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.269621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.269877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db\") pod \"0691c992-818e-46a2-9057-2f9548253076\" (UID: \"0691c992-818e-46a2-9057-2f9548253076\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.277887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out" (OuterVolumeSpecName: "config-out") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.277906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume" (OuterVolumeSpecName: "config-volume") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.278072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p" (OuterVolumeSpecName: "kube-api-access-dxb6p") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "kube-api-access-dxb6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.279155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.279497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db" (OuterVolumeSpecName: "alertmanager-metric-storage-db") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "alertmanager-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.311534 4858 scope.go:117] "RemoveContainer" containerID="89f5cb55d37c396ec6fc110b271605257fc3966e0a587600d0b34d9feee774c6" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.380236 4858 reconciler_common.go:293] "Volume detached for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-alertmanager-metric-storage-db\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.380273 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxb6p\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-kube-api-access-dxb6p\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.380284 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0691c992-818e-46a2-9057-2f9548253076-config-out\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.380292 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0691c992-818e-46a2-9057-2f9548253076-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.380302 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.463552 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.463871 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.484879 4858 reconciler_common.go:293] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-cluster-tls-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.513461 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.70:8776/healthcheck\": read tcp 10.217.0.2:43636->10.217.1.70:8776: read: connection reset by peer" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.531475 4858 scope.go:117] "RemoveContainer" containerID="b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.575332 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.586684 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.586896 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.586946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.586967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.586996 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587084 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587139 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzjqf\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.587199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.606073 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.608272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.608792 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.608964 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.625389 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.640732 4858 scope.go:117] "RemoveContainer" containerID="dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.656253 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.676902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.677759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf" (OuterVolumeSpecName: "kube-api-access-tzjqf") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "kube-api-access-tzjqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.678772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out" (OuterVolumeSpecName: "config-out") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.685310 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.685436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.685506 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config" (OuterVolumeSpecName: "config") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.686375 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.687799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688240 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688291 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle\") pod \"cd68b47b-06e7-4e59-aad6-cae8c376573d\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688357 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts\") pod \"938520b5-d4e9-489e-8f92-642c144d69bc\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data\") pod \"cd68b47b-06e7-4e59-aad6-cae8c376573d\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khv22\" (UniqueName: \"kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22\") pod \"cd68b47b-06e7-4e59-aad6-cae8c376573d\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688545 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688565 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688587 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.688667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l49dz\" (UniqueName: \"kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz\") pod \"938520b5-d4e9-489e-8f92-642c144d69bc\" (UID: \"938520b5-d4e9-489e-8f92-642c144d69bc\") " Nov 22 09:39:46 crc kubenswrapper[4858]: W1122 09:39:46.690031 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c20945ad-d582-4bb8-a485-c6dbb78207fe/volumes/kubernetes.io~secret/web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.690048 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.698385 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.699927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.700420 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.700607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz" (OuterVolumeSpecName: "kube-api-access-l49dz") pod "938520b5-d4e9-489e-8f92-642c144d69bc" (UID: "938520b5-d4e9-489e-8f92-642c144d69bc"). InnerVolumeSpecName "kube-api-access-l49dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.706831 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "938520b5-d4e9-489e-8f92-642c144d69bc" (UID: "938520b5-d4e9-489e-8f92-642c144d69bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707081 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rmdz\" (UniqueName: \"kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs\") pod \"cd68b47b-06e7-4e59-aad6-cae8c376573d\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated\") pod \"11074703-ddac-49f9-b53d-5ec6c721af7d\" (UID: \"11074703-ddac-49f9-b53d-5ec6c721af7d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707355 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk6hc\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc\") pod \"9926527b-80a8-4a26-bc82-053200dbb73f\" (UID: \"9926527b-80a8-4a26-bc82-053200dbb73f\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.707388 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs\") pod \"cd68b47b-06e7-4e59-aad6-cae8c376573d\" (UID: \"cd68b47b-06e7-4e59-aad6-cae8c376573d\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708084 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-config\") 
on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708096 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708105 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9926527b-80a8-4a26-bc82-053200dbb73f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708113 4858 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708123 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzjqf\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-kube-api-access-tzjqf\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708131 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708139 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l49dz\" (UniqueName: \"kubernetes.io/projected/938520b5-d4e9-489e-8f92-642c144d69bc-kube-api-access-l49dz\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708148 4858 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708157 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c20945ad-d582-4bb8-a485-c6dbb78207fe-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708165 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c20945ad-d582-4bb8-a485-c6dbb78207fe-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708174 4858 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708184 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c20945ad-d582-4bb8-a485-c6dbb78207fe-config-out\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.708193 4858 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc 
kubenswrapper[4858]: I1122 09:39:46.708201 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/938520b5-d4e9-489e-8f92-642c144d69bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.714304 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.715494 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.731270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.732851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22" (OuterVolumeSpecName: "kube-api-access-khv22") pod "cd68b47b-06e7-4e59-aad6-cae8c376573d" (UID: "cd68b47b-06e7-4e59-aad6-cae8c376573d"). InnerVolumeSpecName "kube-api-access-khv22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.738686 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.764604 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.1.52:9311/healthcheck\": read tcp 10.217.0.2:44420->10.217.1.52:9311: read: connection reset by peer" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.765312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz" (OuterVolumeSpecName: "kube-api-access-2rmdz") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "kube-api-access-2rmdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.777758 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.1.52:9311/healthcheck\": read tcp 10.217.0.2:44404->10.217.1.52:9311: read: connection reset by peer" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.778574 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812859 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812883 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812893 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khv22\" (UniqueName: \"kubernetes.io/projected/cd68b47b-06e7-4e59-aad6-cae8c376573d-kube-api-access-khv22\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812901 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812892 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc" (OuterVolumeSpecName: "kube-api-access-jk6hc") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "kube-api-access-jk6hc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812909 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rmdz\" (UniqueName: \"kubernetes.io/projected/11074703-ddac-49f9-b53d-5ec6c721af7d-kube-api-access-2rmdz\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.812965 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11074703-ddac-49f9-b53d-5ec6c721af7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.814701 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.814995 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-central-agent" containerID="cri-o://921a60c315076bfa09bfff124ab92deecdc0625f09d81b3bb232d4ef1e293e81" gracePeriod=30 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.815212 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="proxy-httpd" containerID="cri-o://7c22c1647b976812b9a9e2e33c7532864b50fff449effa75e59831dd2b9c3c8f" gracePeriod=30 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.815306 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-notification-agent" containerID="cri-o://37be865a00cf89c403b4aeab789ef0fd27e0c3496d6c037ceca384efb5e151a4" gracePeriod=30 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.815499 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="sg-core" containerID="cri-o://c025194ddf7a068b573c198cafa5d2010ef4df2be27ccc43f8c168cace634da0" gracePeriod=30 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.905743 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.906011 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" containerName="kube-state-metrics" containerID="cri-o://cd4ac53c7c037b114448c74d1fb5ca115e64028fa709acf595b4d3e033563293" gracePeriod=30 Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.918944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts\") pod \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.919105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr565\" (UniqueName: \"kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565\") pod \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\" (UID: \"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60\") " Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.931299 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" (UID: "ce9f6a1a-f6db-4db1-a07e-62baedc8fc60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.947784 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk6hc\" (UniqueName: \"kubernetes.io/projected/9926527b-80a8-4a26-bc82-053200dbb73f-kube-api-access-jk6hc\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:46 crc kubenswrapper[4858]: I1122 09:39:46.948095 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.009612 4858 scope.go:117] "RemoveContainer" containerID="b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217" Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.010533 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217\": container with ID starting with b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217 not found: ID does not exist" containerID="b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.010570 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217"} err="failed to get container status \"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217\": rpc error: code = NotFound desc = could not find container \"b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217\": container with ID starting with b3eb7275dc0937edd210bad381b4fa805fe13e31dc402d39c1b0b8bd0e933217 not found: ID does not exist" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.010614 4858 scope.go:117] "RemoveContainer" containerID="dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d" Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.011427 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d\": container with ID starting with dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d not found: ID does not exist" containerID="dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.011459 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d"} err="failed to get container status \"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d\": rpc error: code = NotFound desc = could not find container \"dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d\": container with ID starting with dfa015bbade9ec2e60ee8909f2b7bcae0a69dc69f4bc2df25ebf7cf4f057bf5d not found: ID does not exist" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.032420 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-554bc84945-x99pt" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" 
containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.133:8004/healthcheck\": read tcp 10.217.0.2:59734->10.217.1.133:8004: read: connection reset by peer" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.040951 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.134:8000/healthcheck\": dial tcp 10.217.1.134:8000: connect: connection refused" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.042062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565" (OuterVolumeSpecName: "kube-api-access-gr565") pod "ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" (UID: "ce9f6a1a-f6db-4db1-a07e-62baedc8fc60"). InnerVolumeSpecName "kube-api-access-gr565". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.055604 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr565\" (UniqueName: \"kubernetes.io/projected/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60-kube-api-access-gr565\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.059993 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.060486 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b4271125-14af-4748-97ad-ed766b2d26b8" containerName="memcached" containerID="cri-o://0e9af0329f586f29a072f29f596f2dfaa4a85abfbc8d919d8bc5c0646f5a690e" gracePeriod=30 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.069028 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.069248 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7567c6b846-s845h" podUID="1354cd0c-52c3-4174-b012-21a2b5ea8324" containerName="keystone-api" containerID="cri-o://f098abc2e40e7e1a013de3bcdfe604e5a7ae91217777b7915ebd28ba5482db6d" gracePeriod=30 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.069637 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.113:8775/\": read tcp 10.217.0.2:57392->10.217.1.113:8775: read: connection reset by peer" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.069907 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.113:8775/\": read tcp 10.217.0.2:57376->10.217.1.113:8775: read: connection reset by peer" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.088102 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.112748 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-tfssl"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.123892 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-tfssl"] Nov 22 09:39:47 crc 
kubenswrapper[4858]: I1122 09:39:47.138594 4858 generic.go:334] "Generic (PLEG): container finished" podID="f559e642-5710-41ad-b508-a76cf28d62ca" containerID="bfc6172709f280143555d90466293bfa2c52e1d1c69bc716075bf79ffcfb671e" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.138668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554bc84945-x99pt" event={"ID":"f559e642-5710-41ad-b508-a76cf28d62ca","Type":"ContainerDied","Data":"bfc6172709f280143555d90466293bfa2c52e1d1c69bc716075bf79ffcfb671e"} Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.140604 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b podName:c20945ad-d582-4bb8-a485-c6dbb78207fe nodeName:}" failed. No retries permitted until 2025-11-22 09:39:47.640583179 +0000 UTC m=+8949.482006185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "prometheus-metric-storage-db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.178475 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.190056 4858 generic.go:334] "Generic (PLEG): container finished" podID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerID="c025194ddf7a068b573c198cafa5d2010ef4df2be27ccc43f8c168cace634da0" exitCode=2 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.190181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerDied","Data":"c025194ddf7a068b573c198cafa5d2010ef4df2be27ccc43f8c168cace634da0"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.191628 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2e22-account-create-qkb8w"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.200076 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2e22-account-create-qkb8w"] Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.201609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-579546c64d-fkr76" event={"ID":"9926527b-80a8-4a26-bc82-053200dbb73f","Type":"ContainerDied","Data":"a1ef9ed064dd54abc9efff79d9dac23ea89e2cdcbd12b540f9bfffe0a4b59651"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.201663 4858 scope.go:117] "RemoveContainer" containerID="f4d10c4811595086f1768850e4ba22ed889daba484227d3c6748462dbd9d902b" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.201755 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-579546c64d-fkr76" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.284654 4858 generic.go:334] "Generic (PLEG): container finished" podID="084a15f9-e534-46ad-b38a-17eeb1b6589e" containerID="b4ec36e83fb97861924a17f10f38076cf0809be44ef3d52da5e469db14de059b" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.284711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance034d-account-delete-lrkfs" event={"ID":"084a15f9-e534-46ad-b38a-17eeb1b6589e","Type":"ContainerDied","Data":"b4ec36e83fb97861924a17f10f38076cf0809be44ef3d52da5e469db14de059b"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.323198 4858 generic.go:334] "Generic (PLEG): container finished" podID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerID="0d9c14545905e3cda2f017bb37cf1a67c2243ee303a9eec348eaebba94004931" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.323263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerDied","Data":"0d9c14545905e3cda2f017bb37cf1a67c2243ee303a9eec348eaebba94004931"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.328606 4858 generic.go:334] "Generic (PLEG): container finished" podID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerID="cddb36142f710de01a2a2604912a1de51c98b16778d69d3541cb2e91fd0be10f" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.328654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerDied","Data":"cddb36142f710de01a2a2604912a1de51c98b16778d69d3541cb2e91fd0be10f"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.329938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi04c7-account-delete-q782b" event={"ID":"15c7de97-b620-4e9b-8e17-27da546d6fb8","Type":"ContainerStarted","Data":"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.340742 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi04c7-account-delete-q782b" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.347497 4858 generic.go:334] "Generic (PLEG): container finished" podID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerID="428bc38b18119c4305d118eb828b9d35bf76f7f0732bf893cb1b34f626cfecdb" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.347553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerDied","Data":"428bc38b18119c4305d118eb828b9d35bf76f7f0732bf893cb1b34f626cfecdb"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.357288 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerID="2616f6e010c2f47567c82c59233a83474d6307221bc0e3019310b01ca819c5e0" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.357356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerDied","Data":"2616f6e010c2f47567c82c59233a83474d6307221bc0e3019310b01ca819c5e0"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.364685 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi04c7-account-delete-q782b" podStartSLOduration=6.3646656 podStartE2EDuration="6.3646656s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:47.356256071 +0000 UTC m=+8949.197679067" watchObservedRunningTime="2025-11-22 09:39:47.3646656 +0000 UTC m=+8949.206088596" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.368723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindera285-account-delete-9xdwn" event={"ID":"938520b5-d4e9-489e-8f92-642c144d69bc","Type":"ContainerDied","Data":"cd501d09e7356d65cb9ec1d14255af18626489d931bc6c57bdec78a6513b84bb"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.368828 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd501d09e7356d65cb9ec1d14255af18626489d931bc6c57bdec78a6513b84bb" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.369044 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cindera285-account-delete-9xdwn" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.384463 4858 generic.go:334] "Generic (PLEG): container finished" podID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" containerID="cd4ac53c7c037b114448c74d1fb5ca115e64028fa709acf595b4d3e033563293" exitCode=2 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.384576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cf713f2-824f-4d23-bb3a-1b1f7ef99020","Type":"ContainerDied","Data":"cd4ac53c7c037b114448c74d1fb5ca115e64028fa709acf595b4d3e033563293"} Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.386375 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.386521 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:47.886503869 +0000 UTC m=+8949.727926875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.405509 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron6b93-account-delete-n54tn" event={"ID":"ce9f6a1a-f6db-4db1-a07e-62baedc8fc60","Type":"ContainerDied","Data":"a0f025946911279abbd933d6c00ad76b80427f3bf95eeff98a000f0f2302c82b"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.405553 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0f025946911279abbd933d6c00ad76b80427f3bf95eeff98a000f0f2302c82b" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.405644 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron6b93-account-delete-n54tn" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.429451 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerID="065e51e4b82bfd09ef58eeccb1d741e51d5167ffe8e2bd644d87495b643cbfb5" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.429506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerDied","Data":"065e51e4b82bfd09ef58eeccb1d741e51d5167ffe8e2bd644d87495b643cbfb5"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.445587 4858 generic.go:334] "Generic (PLEG): container finished" podID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerID="9f5e64397fcfbf30b8e57de5cd79bbaa5aa1cfb6dc41d738673c9552face9f4f" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.445655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerDied","Data":"9f5e64397fcfbf30b8e57de5cd79bbaa5aa1cfb6dc41d738673c9552face9f4f"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.448607 4858 generic.go:334] "Generic (PLEG): container finished" podID="879cb25d-5d39-48df-ac21-505127e58fd1" containerID="55ca57bd132c43b406a7e2f78d44ccc4ccfef51b3c54f7deb21f3fcdf315f42d" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.448695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerDied","Data":"55ca57bd132c43b406a7e2f78d44ccc4ccfef51b3c54f7deb21f3fcdf315f42d"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.450157 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c0f4278-ebde-458a-85b5-9f95824cee1a" containerID="b6215278e1bc20990b626e0be03145dc61b4dd24770816117ce4a8a7224c3d7a" exitCode=0 Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.450618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8911-account-delete-lsr9n" event={"ID":"1c0f4278-ebde-458a-85b5-9f95824cee1a","Type":"ContainerDied","Data":"b6215278e1bc20990b626e0be03145dc61b4dd24770816117ce4a8a7224c3d7a"} Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.450678 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.451095 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.451258 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.451601 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.550709 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214a7d5c-aa83-4ebc-b7dc-942ecdfdb759" path="/var/lib/kubelet/pods/214a7d5c-aa83-4ebc-b7dc-942ecdfdb759/volumes" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.551668 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" path="/var/lib/kubelet/pods/81d944bd-93c5-4863-96df-f83a4ff1db9b/volumes" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.552226 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c146df95-ce54-4862-86de-1f1612502264" path="/var/lib/kubelet/pods/c146df95-ce54-4862-86de-1f1612502264/volumes" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.553430 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced90ddf-eae9-45e2-ae0a-9306ed9873d7" path="/var/lib/kubelet/pods/ced90ddf-eae9-45e2-ae0a-9306ed9873d7/volumes" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.586185 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2" (OuterVolumeSpecName: "mysql-db") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "pvc-64561564-f9a1-481b-8d85-edbea98f10b2". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.592738 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") on node \"crc\" " Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.694845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") pod \"c20945ad-d582-4bb8-a485-c6dbb78207fe\" (UID: \"c20945ad-d582-4bb8-a485-c6dbb78207fe\") " Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.779012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config" (OuterVolumeSpecName: "web-config") pod "0691c992-818e-46a2-9057-2f9548253076" (UID: "0691c992-818e-46a2-9057-2f9548253076"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.820846 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0691c992-818e-46a2-9057-2f9548253076-web-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.824229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd68b47b-06e7-4e59-aad6-cae8c376573d" (UID: "cd68b47b-06e7-4e59-aad6-cae8c376573d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.824537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). InnerVolumeSpecName "pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.831247 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.831419 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-64561564-f9a1-481b-8d85-edbea98f10b2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2") on node "crc" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.892192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data" (OuterVolumeSpecName: "config-data") pod "cd68b47b-06e7-4e59-aad6-cae8c376573d" (UID: "cd68b47b-06e7-4e59-aad6-cae8c376573d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.933888 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.934198 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") on node \"crc\" " Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.934218 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.934231 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.934244 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-64561564-f9a1-481b-8d85-edbea98f10b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64561564-f9a1-481b-8d85-edbea98f10b2\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:47 crc kubenswrapper[4858]: E1122 09:39:47.936457 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:48.936430649 +0000 UTC m=+8950.777853655 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.937203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.955766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data" (OuterVolumeSpecName: "config-data") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:47 crc kubenswrapper[4858]: I1122 09:39:47.983296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "11074703-ddac-49f9-b53d-5ec6c721af7d" (UID: "11074703-ddac-49f9-b53d-5ec6c721af7d"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.040576 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.040936 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.041005 4858 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11074703-ddac-49f9-b53d-5ec6c721af7d-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.072271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.081645 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "cd68b47b-06e7-4e59-aad6-cae8c376573d" (UID: "cd68b47b-06e7-4e59-aad6-cae8c376573d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.087423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.100364 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.100568 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b") on node "crc" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.121304 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="galera" containerID="cri-o://cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e" gracePeriod=29 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.130552 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "cd68b47b-06e7-4e59-aad6-cae8c376573d" (UID: "cd68b47b-06e7-4e59-aad6-cae8c376573d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.144195 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.144220 4858 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.144232 4858 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd68b47b-06e7-4e59-aad6-cae8c376573d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.144241 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3c218f1-7fbc-4934-91b7-5bc7a999943b\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.144249 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.149595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config" (OuterVolumeSpecName: "web-config") pod "c20945ad-d582-4bb8-a485-c6dbb78207fe" (UID: "c20945ad-d582-4bb8-a485-c6dbb78207fe"). 
InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.150739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9926527b-80a8-4a26-bc82-053200dbb73f" (UID: "9926527b-80a8-4a26-bc82-053200dbb73f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.247040 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9926527b-80a8-4a26-bc82-053200dbb73f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.247113 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c20945ad-d582-4bb8-a485-c6dbb78207fe-web-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.348826 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.349356 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data podName:59060e41-09d2-4441-8563-5302fd77a52d nodeName:}" failed. No retries permitted until 2025-11-22 09:39:56.349339052 +0000 UTC m=+8958.190762058 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data") pod "rabbitmq-server-0" (UID: "59060e41-09d2-4441-8563-5302fd77a52d") : configmap "rabbitmq-config-data" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.365608 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.367101 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.377739 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.377821 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.472975 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/aodh2e22-account-delete-nqn9k" podUID="64b43663-db69-4e42-a14e-85cc35b48dc3" containerName="mariadb-account-delete" containerID="cri-o://7e2fdf55afbc977857bfd741ee23ebdf1ec7fef9d5cc6c0b8e22d103a1bd9b4a" gracePeriod=30 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.485272 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/heataca4-account-delete-65j4m" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.497943 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh2e22-account-delete-nqn9k" podStartSLOduration=7.497925438 podStartE2EDuration="7.497925438s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:48.492780653 +0000 UTC m=+8950.334203669" watchObservedRunningTime="2025-11-22 09:39:48.497925438 +0000 UTC m=+8950.339348444" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.514160 4858 generic.go:334] "Generic (PLEG): container finished" podID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerID="3e9f60a9242f5ea9166f64aec3d772c195c831a12e40616f61d15e94761b65aa" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.526855 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heataca4-account-delete-65j4m" podStartSLOduration=7.526837763 podStartE2EDuration="7.526837763s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:48.521672888 +0000 UTC m=+8950.363095894" watchObservedRunningTime="2025-11-22 09:39:48.526837763 +0000 UTC m=+8950.368260769" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.538244 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0b969-account-delete-2lntr" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.557012 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.557074 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:49.05705666 +0000 UTC m=+8950.898479666 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.565643 4858 generic.go:334] "Generic (PLEG): container finished" podID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerID="7c22c1647b976812b9a9e2e33c7532864b50fff449effa75e59831dd2b9c3c8f" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.565689 4858 generic.go:334] "Generic (PLEG): container finished" podID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerID="921a60c315076bfa09bfff124ab92deecdc0625f09d81b3bb232d4ef1e293e81" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.573575 4858 generic.go:334] "Generic (PLEG): container finished" podID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerID="57907d16e0311bc717a33ae1f359ab9a46d08e1abe4ca40d8893d8086ef774ac" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.587016 4858 generic.go:334] "Generic (PLEG): container finished" podID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerID="3967676f8e95adb5ee5014b410d8fa6ed22970b37607a556cdda336ed986c928" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.587044 4858 generic.go:334] "Generic (PLEG): container finished" podID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerID="87056ef5db220c131bb3ec20fed1d41cb562684d629666af50d9c09b8a77410d" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.588339 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee404aa4-d838-4368-9e25-6648adde67ee" containerID="8ed3ae5cedd53bd3a69e7e010ea65e7a6fc66b139c069cae1957b6aaf00b873d" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.591645 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4271125-14af-4748-97ad-ed766b2d26b8" containerID="0e9af0329f586f29a072f29f596f2dfaa4a85abfbc8d919d8bc5c0646f5a690e" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.593519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell0b969-account-delete-2lntr" podStartSLOduration=7.593498306 podStartE2EDuration="7.593498306s" podCreationTimestamp="2025-11-22 09:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:39:48.573851748 +0000 UTC m=+8950.415274754" watchObservedRunningTime="2025-11-22 09:39:48.593498306 +0000 UTC m=+8950.434921312" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.596278 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerID="2a569e7aef5c1478654a43f23ff834b089ab7b81d90062f8bc434d0602c00539" exitCode=0 Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.601490 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi04c7-account-delete-q782b" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.658267 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.658388 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:49.158373653 +0000 UTC m=+8950.999796659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.658510 4858 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.658578 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data podName:e8384e15-b249-44a6-8d35-8a2066b3da7b nodeName:}" failed. No retries permitted until 2025-11-22 09:39:56.658542338 +0000 UTC m=+8958.499965344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.658737 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.148:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.695142 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.697053 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.699589 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.699647 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="419367a7-1838-4692-b6fc-f266985765d7" containerName="nova-scheduler-scheduler" Nov 22 09:39:48 crc kubenswrapper[4858]: E1122 09:39:48.788562 4858 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0da6e158-7f6d-434b-bd4a-9a902a5879d9","Type":"ContainerDied","Data":"94b50848184d79c77f55fd7a85d31fe058007eabd302c1a832cbb3d700404f40"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788658 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b50848184d79c77f55fd7a85d31fe058007eabd302c1a832cbb3d700404f40" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh2e22-account-delete-nqn9k" event={"ID":"64b43663-db69-4e42-a14e-85cc35b48dc3","Type":"ContainerStarted","Data":"7e2fdf55afbc977857bfd741ee23ebdf1ec7fef9d5cc6c0b8e22d103a1bd9b4a"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heataca4-account-delete-65j4m" event={"ID":"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6","Type":"ContainerStarted","Data":"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575cc76dd7-swvhx" event={"ID":"bda2acef-1ebf-4106-b75f-57d3c2a80758","Type":"ContainerDied","Data":"9722c974109e8edf322a2589a05375dfe912699934b0c20542e092fb4a849ef2"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788710 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9722c974109e8edf322a2589a05375dfe912699934b0c20542e092fb4a849ef2" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865798754b-wklbv" event={"ID":"48b023fd-a47e-4fac-b75f-50e32cd8ed68","Type":"ContainerDied","Data":"af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788726 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9ef32d29342d4f02496885c3bf267af0b98034402891f4e91554bd23ca7ead" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788736 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" event={"ID":"c44b3c43-4aed-4726-a49e-693cd279bca6","Type":"ContainerDied","Data":"3e9f60a9242f5ea9166f64aec3d772c195c831a12e40616f61d15e94761b65aa"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0b969-account-delete-2lntr" event={"ID":"eb9be543-7566-4423-b4ed-5d9596cf21a4","Type":"ContainerStarted","Data":"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerDied","Data":"7c22c1647b976812b9a9e2e33c7532864b50fff449effa75e59831dd2b9c3c8f"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerDied","Data":"921a60c315076bfa09bfff124ab92deecdc0625f09d81b3bb232d4ef1e293e81"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerDied","Data":"57907d16e0311bc717a33ae1f359ab9a46d08e1abe4ca40d8893d8086ef774ac"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74977f9d76-k6dlw" event={"ID":"2e9999f0-5166-4fe0-9110-374b372ff6da","Type":"ContainerDied","Data":"e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788797 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e8c52b8311cc93960187add2480971270ca82895f4ba0f72135561b2b6652a" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerDied","Data":"3967676f8e95adb5ee5014b410d8fa6ed22970b37607a556cdda336ed986c928"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerDied","Data":"87056ef5db220c131bb3ec20fed1d41cb562684d629666af50d9c09b8a77410d"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788823 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerDied","Data":"8ed3ae5cedd53bd3a69e7e010ea65e7a6fc66b139c069cae1957b6aaf00b873d"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b4271125-14af-4748-97ad-ed766b2d26b8","Type":"ContainerDied","Data":"0e9af0329f586f29a072f29f596f2dfaa4a85abfbc8d919d8bc5c0646f5a690e"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerDied","Data":"2a569e7aef5c1478654a43f23ff834b089ab7b81d90062f8bc434d0602c00539"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1659016a-e2b7-4dbd-8ad1-56bef9995d64","Type":"ContainerDied","Data":"f6b7add04a797235d39d3adaefa419ff364a9af11f9a24949843784e159f7c9d"} Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.788867 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b7add04a797235d39d3adaefa419ff364a9af11f9a24949843784e159f7c9d" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.790157 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.798525 4858 scope.go:117] 
"RemoveContainer" containerID="47732564dcbee4396779be68097333bf6ceab57ebc135dff5097a79c851b70b2" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.805889 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.843546 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.863735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.863805 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom\") pod \"bda2acef-1ebf-4106-b75f-57d3c2a80758\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.863854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data\") pod \"bda2acef-1ebf-4106-b75f-57d3c2a80758\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.863926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864065 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b76tj\" (UniqueName: \"kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864128 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle\") pod \"bda2acef-1ebf-4106-b75f-57d3c2a80758\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dl56\" (UniqueName: \"kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56\") pod \"bda2acef-1ebf-4106-b75f-57d3c2a80758\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs\") pod \"bda2acef-1ebf-4106-b75f-57d3c2a80758\" (UID: \"bda2acef-1ebf-4106-b75f-57d3c2a80758\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.864482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs\") pod \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\" (UID: \"48b023fd-a47e-4fac-b75f-50e32cd8ed68\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.876389 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj" (OuterVolumeSpecName: "kube-api-access-b76tj") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "kube-api-access-b76tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.877763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs" (OuterVolumeSpecName: "logs") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.878053 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.878800 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b76tj\" (UniqueName: \"kubernetes.io/projected/48b023fd-a47e-4fac-b75f-50e32cd8ed68-kube-api-access-b76tj\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.878935 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48b023fd-a47e-4fac-b75f-50e32cd8ed68-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.882424 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bda2acef-1ebf-4106-b75f-57d3c2a80758" (UID: "bda2acef-1ebf-4106-b75f-57d3c2a80758"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.882771 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.887069 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.892674 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56" (OuterVolumeSpecName: "kube-api-access-6dl56") pod "bda2acef-1ebf-4106-b75f-57d3c2a80758" (UID: "bda2acef-1ebf-4106-b75f-57d3c2a80758"). InnerVolumeSpecName "kube-api-access-6dl56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.904379 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.904437 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.904727 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs" (OuterVolumeSpecName: "logs") pod "bda2acef-1ebf-4106-b75f-57d3c2a80758" (UID: "bda2acef-1ebf-4106-b75f-57d3c2a80758"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.916837 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.932917 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.939519 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.948869 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.951207 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.957620 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.968706 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.987511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqrl\" (UniqueName: \"kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl\") pod \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.987719 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.987835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.987970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.988066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.988163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zskhl\" (UniqueName: \"kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.988280 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.991701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.991859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.991972 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.992207 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle\") pod \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.992260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs\") pod \"879cb25d-5d39-48df-ac21-505127e58fd1\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.992287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt6s9\" (UniqueName: \"kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.991178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl" (OuterVolumeSpecName: "kube-api-access-qjqrl") pod "4cf713f2-824f-4d23-bb3a-1b1f7ef99020" (UID: "4cf713f2-824f-4d23-bb3a-1b1f7ef99020"). InnerVolumeSpecName "kube-api-access-qjqrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.992859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.992978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs\") pod \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993271 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 
09:39:48.993381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data\") pod \"879cb25d-5d39-48df-ac21-505127e58fd1\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjff4\" (UniqueName: \"kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.993706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom\") pod \"879cb25d-5d39-48df-ac21-505127e58fd1\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.995743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptzgw\" (UniqueName: \"kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw\") pod \"879cb25d-5d39-48df-ac21-505127e58fd1\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.995897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996007 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996128 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996490 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 
09:39:48.996715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996834 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7fh2\" (UniqueName: \"kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.996935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle\") pod \"879cb25d-5d39-48df-ac21-505127e58fd1\" (UID: \"879cb25d-5d39-48df-ac21-505127e58fd1\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.997069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.997180 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.997289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.997828 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.997990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data\") pod \"2e9999f0-5166-4fe0-9110-374b372ff6da\" (UID: \"2e9999f0-5166-4fe0-9110-374b372ff6da\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998366 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzhc8\" (UniqueName: \"kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998570 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs\") pod \"dc3d42a8-0810-462c-abd3-73b770f8fb03\" (UID: \"dc3d42a8-0810-462c-abd3-73b770f8fb03\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998709 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run\") pod \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\" (UID: \"1659016a-e2b7-4dbd-8ad1-56bef9995d64\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom\") pod \"f559e642-5710-41ad-b508-a76cf28d62ca\" (UID: \"f559e642-5710-41ad-b508-a76cf28d62ca\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.998933 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id\") pod \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\" (UID: \"0da6e158-7f6d-434b-bd4a-9a902a5879d9\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.999043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config\") pod \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\" (UID: \"4cf713f2-824f-4d23-bb3a-1b1f7ef99020\") " Nov 22 09:39:48 crc kubenswrapper[4858]: I1122 09:39:48.999926 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dl56\" (UniqueName: \"kubernetes.io/projected/bda2acef-1ebf-4106-b75f-57d3c2a80758-kube-api-access-6dl56\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.000043 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda2acef-1ebf-4106-b75f-57d3c2a80758-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.000138 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjqrl\" (UniqueName: \"kubernetes.io/projected/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-api-access-qjqrl\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.000228 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.000432 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.000564 4858 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: E1122 09:39:49.000729 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:49 crc kubenswrapper[4858]: E1122 09:39:49.000852 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:51.000834483 +0000 UTC m=+8952.842257549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.007503 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.009863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs" (OuterVolumeSpecName: "logs") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.010288 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.010741 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.012644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.018760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9" (OuterVolumeSpecName: "kube-api-access-mt6s9") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "kube-api-access-mt6s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.020964 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.021789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs" (OuterVolumeSpecName: "logs") pod "879cb25d-5d39-48df-ac21-505127e58fd1" (UID: "879cb25d-5d39-48df-ac21-505127e58fd1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.022847 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts" (OuterVolumeSpecName: "scripts") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.026374 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.028158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs" (OuterVolumeSpecName: "logs") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.032170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs" (OuterVolumeSpecName: "logs") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.044185 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.044596 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts" (OuterVolumeSpecName: "scripts") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.044690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl" (OuterVolumeSpecName: "kube-api-access-zskhl") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "kube-api-access-zskhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.045627 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.049455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.056452 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.059668 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8" (OuterVolumeSpecName: "kube-api-access-qzhc8") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "kube-api-access-qzhc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.065523 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.068033 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.077197 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs" (OuterVolumeSpecName: "logs") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.092736 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle\") pod \"1e952720-9083-48e0-96d1-54f1cfacfbf9\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102399 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs\") pod \"1e952720-9083-48e0-96d1-54f1cfacfbf9\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102629 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs\") pod \"1e952720-9083-48e0-96d1-54f1cfacfbf9\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102686 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfbh9\" (UniqueName: \"kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9\") pod \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102720 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqn9n\" (UniqueName: \"kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n\") pod \"1e952720-9083-48e0-96d1-54f1cfacfbf9\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102745 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data\") pod \"1e952720-9083-48e0-96d1-54f1cfacfbf9\" (UID: \"1e952720-9083-48e0-96d1-54f1cfacfbf9\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.102864 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w962\" (UniqueName: \"kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962\") pod \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\" (UID: \"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.103000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts\") pod \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\" (UID: \"7c014721-aa5e-4b1e-93b7-36b6832df6c6\") " Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104091 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104120 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zskhl\" (UniqueName: \"kubernetes.io/projected/0da6e158-7f6d-434b-bd4a-9a902a5879d9-kube-api-access-zskhl\") on node \"crc\" DevicePath \"\"" 
Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104135 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/879cb25d-5d39-48df-ac21-505127e58fd1-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104147 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt6s9\" (UniqueName: \"kubernetes.io/projected/1659016a-e2b7-4dbd-8ad1-56bef9995d64-kube-api-access-mt6s9\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104159 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104169 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104180 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e9999f0-5166-4fe0-9110-374b372ff6da-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104190 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0da6e158-7f6d-434b-bd4a-9a902a5879d9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104201 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzhc8\" (UniqueName: \"kubernetes.io/projected/f559e642-5710-41ad-b508-a76cf28d62ca-kube-api-access-qzhc8\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104243 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3d42a8-0810-462c-abd3-73b770f8fb03-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104254 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1659016a-e2b7-4dbd-8ad1-56bef9995d64-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: E1122 09:39:49.104255 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104269 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.104279 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0da6e158-7f6d-434b-bd4a-9a902a5879d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:49 crc kubenswrapper[4858]: E1122 09:39:49.104308 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:50.104290884 +0000 UTC m=+8951.945713960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.105066 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c014721-aa5e-4b1e-93b7-36b6832df6c6" (UID: "7c014721-aa5e-4b1e-93b7-36b6832df6c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.106120 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.106777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs" (OuterVolumeSpecName: "logs") pod "1e952720-9083-48e0-96d1-54f1cfacfbf9" (UID: "1e952720-9083-48e0-96d1-54f1cfacfbf9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.107265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs" (OuterVolumeSpecName: "logs") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.114519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts" (OuterVolumeSpecName: "scripts") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:49 crc kubenswrapper[4858]: I1122 09:39:49.115660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9" (OuterVolumeSpecName: "kube-api-access-hfbh9") pod "7c014721-aa5e-4b1e-93b7-36b6832df6c6" (UID: "7c014721-aa5e-4b1e-93b7-36b6832df6c6"). InnerVolumeSpecName "kube-api-access-hfbh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.117138 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.128399 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.128523 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-579546c64d-fkr76"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.139642 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.141854 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.141889 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.146991 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2" (OuterVolumeSpecName: "kube-api-access-c7fh2") pod 
"2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "kube-api-access-c7fh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.147041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw" (OuterVolumeSpecName: "kube-api-access-ptzgw") pod "879cb25d-5d39-48df-ac21-505127e58fd1" (UID: "879cb25d-5d39-48df-ac21-505127e58fd1"). InnerVolumeSpecName "kube-api-access-ptzgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.147057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4" (OuterVolumeSpecName: "kube-api-access-mjff4") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "kube-api-access-mjff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.159528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "879cb25d-5d39-48df-ac21-505127e58fd1" (UID: "879cb25d-5d39-48df-ac21-505127e58fd1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.150936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.194556 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts" (OuterVolumeSpecName: "scripts") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.196895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n" (OuterVolumeSpecName: "kube-api-access-tqn9n") pod "1e952720-9083-48e0-96d1-54f1cfacfbf9" (UID: "1e952720-9083-48e0-96d1-54f1cfacfbf9"). InnerVolumeSpecName "kube-api-access-tqn9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.202490 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962" (OuterVolumeSpecName: "kube-api-access-8w962") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "kube-api-access-8w962". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs\") pod \"b4271125-14af-4748-97ad-ed766b2d26b8\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data\") pod \"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205346 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvntz\" (UniqueName: \"kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz\") pod \"b4271125-14af-4748-97ad-ed766b2d26b8\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle\") pod \"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205618 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205634 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs\") pod 
\"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205651 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbc9f\" (UniqueName: \"kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f\") pod \"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle\") pod \"b4271125-14af-4748-97ad-ed766b2d26b8\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs\") pod \"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom\") pod \"c44b3c43-4aed-4726-a49e-693cd279bca6\" (UID: \"c44b3c43-4aed-4726-a49e-693cd279bca6\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205814 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data\") pod \"b4271125-14af-4748-97ad-ed766b2d26b8\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config\") pod \"b4271125-14af-4748-97ad-ed766b2d26b8\" (UID: \"b4271125-14af-4748-97ad-ed766b2d26b8\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.205872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs\") pod \"555e309c-8c41-4ac1-8eca-60e203f92e4e\" (UID: \"555e309c-8c41-4ac1-8eca-60e203f92e4e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206240 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c014721-aa5e-4b1e-93b7-36b6832df6c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206252 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjff4\" (UniqueName: \"kubernetes.io/projected/dc3d42a8-0810-462c-abd3-73b770f8fb03-kube-api-access-mjff4\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206263 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206272 4858 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ptzgw\" (UniqueName: \"kubernetes.io/projected/879cb25d-5d39-48df-ac21-505127e58fd1-kube-api-access-ptzgw\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206281 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206289 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7fh2\" (UniqueName: \"kubernetes.io/projected/2e9999f0-5166-4fe0-9110-374b372ff6da-kube-api-access-c7fh2\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206298 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206306 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206326 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e952720-9083-48e0-96d1-54f1cfacfbf9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206335 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206343 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfbh9\" (UniqueName: \"kubernetes.io/projected/7c014721-aa5e-4b1e-93b7-36b6832df6c6-kube-api-access-hfbh9\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206351 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqn9n\" (UniqueName: \"kubernetes.io/projected/1e952720-9083-48e0-96d1-54f1cfacfbf9-kube-api-access-tqn9n\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206359 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.206367 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w962\" (UniqueName: \"kubernetes.io/projected/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-kube-api-access-8w962\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.206452 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.206497 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:50.206483533 +0000 UTC m=+8952.047906539 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.216236 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b4271125-14af-4748-97ad-ed766b2d26b8" (UID: "b4271125-14af-4748-97ad-ed766b2d26b8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.216744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data" (OuterVolumeSpecName: "config-data") pod "b4271125-14af-4748-97ad-ed766b2d26b8" (UID: "b4271125-14af-4748-97ad-ed766b2d26b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.243244 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz" (OuterVolumeSpecName: "kube-api-access-mvntz") pod "b4271125-14af-4748-97ad-ed766b2d26b8" (UID: "b4271125-14af-4748-97ad-ed766b2d26b8"). InnerVolumeSpecName "kube-api-access-mvntz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.249745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb" (OuterVolumeSpecName: "kube-api-access-d2fvb") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "kube-api-access-d2fvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.267420 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.272536 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts" (OuterVolumeSpecName: "scripts") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.272580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bda2acef-1ebf-4106-b75f-57d3c2a80758" (UID: "bda2acef-1ebf-4106-b75f-57d3c2a80758"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.285199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f" (OuterVolumeSpecName: "kube-api-access-lbc9f") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "kube-api-access-lbc9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308536 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308565 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308574 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308582 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b4271125-14af-4748-97ad-ed766b2d26b8-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308591 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvntz\" (UniqueName: \"kubernetes.io/projected/b4271125-14af-4748-97ad-ed766b2d26b8-kube-api-access-mvntz\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308601 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308609 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2fvb\" (UniqueName: \"kubernetes.io/projected/555e309c-8c41-4ac1-8eca-60e203f92e4e-kube-api-access-d2fvb\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.308617 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbc9f\" (UniqueName: \"kubernetes.io/projected/c44b3c43-4aed-4726-a49e-693cd279bca6-kube-api-access-lbc9f\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.367546 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.370228 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.371667 4858 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:49.371722 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.385561 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.464782 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.511335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts\") pod \"1c0f4278-ebde-458a-85b5-9f95824cee1a\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.511788 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts\") pod \"084a15f9-e534-46ad-b38a-17eeb1b6589e\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.512076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvck2\" (UniqueName: \"kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2\") pod \"084a15f9-e534-46ad-b38a-17eeb1b6589e\" (UID: \"084a15f9-e534-46ad-b38a-17eeb1b6589e\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.512133 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxd8z\" (UniqueName: \"kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z\") pod \"1c0f4278-ebde-458a-85b5-9f95824cee1a\" (UID: \"1c0f4278-ebde-458a-85b5-9f95824cee1a\") " Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.512720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c0f4278-ebde-458a-85b5-9f95824cee1a" (UID: "1c0f4278-ebde-458a-85b5-9f95824cee1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.513790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "084a15f9-e534-46ad-b38a-17eeb1b6589e" (UID: "084a15f9-e534-46ad-b38a-17eeb1b6589e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.533682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2" (OuterVolumeSpecName: "kube-api-access-fvck2") pod "084a15f9-e534-46ad-b38a-17eeb1b6589e" (UID: "084a15f9-e534-46ad-b38a-17eeb1b6589e"). InnerVolumeSpecName "kube-api-access-fvck2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.538239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z" (OuterVolumeSpecName: "kube-api-access-nxd8z") pod "1c0f4278-ebde-458a-85b5-9f95824cee1a" (UID: "1c0f4278-ebde-458a-85b5-9f95824cee1a"). InnerVolumeSpecName "kube-api-access-nxd8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.567406 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0691c992-818e-46a2-9057-2f9548253076" path="/var/lib/kubelet/pods/0691c992-818e-46a2-9057-2f9548253076/volumes" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.568573 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" path="/var/lib/kubelet/pods/11074703-ddac-49f9-b53d-5ec6c721af7d/volumes" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.569718 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" path="/var/lib/kubelet/pods/9926527b-80a8-4a26-bc82-053200dbb73f/volumes" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.571268 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" path="/var/lib/kubelet/pods/c20945ad-d582-4bb8-a485-c6dbb78207fe/volumes" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.583515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.586422 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd68b47b-06e7-4e59-aad6-cae8c376573d" path="/var/lib/kubelet/pods/cd68b47b-06e7-4e59-aad6-cae8c376573d/volumes" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.617903 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0f4278-ebde-458a-85b5-9f95824cee1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.617928 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/084a15f9-e534-46ad-b38a-17eeb1b6589e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.617940 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.617973 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvck2\" (UniqueName: \"kubernetes.io/projected/084a15f9-e534-46ad-b38a-17eeb1b6589e-kube-api-access-fvck2\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.617989 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxd8z\" (UniqueName: \"kubernetes.io/projected/1c0f4278-ebde-458a-85b5-9f95824cee1a-kube-api-access-nxd8z\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.624413 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.632971 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.645339 4858 generic.go:334] "Generic (PLEG): container finished" podID="59060e41-09d2-4441-8563-5302fd77a52d" containerID="48243763ff91a842163928192fc2ea246f302325792033ccd2427519d16f31b0" exitCode=0 Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.648003 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.651459 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement8911-account-delete-lsr9n" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.673459 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.682202 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.716747 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.721100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.720691 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.726804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.731360 4858 generic.go:334] "Generic (PLEG): container finished" podID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerID="7a1b9aa9bf7fdcfe3b6dd842717d88716652a749a754b92b43ad5226f5e6ec33" exitCode=0 Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.734958 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.739016 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.740299 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.741974 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican04c4-account-delete-8pfbj" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.746975 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.751459 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance034d-account-delete-lrkfs" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.752936 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.752982 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865798754b-wklbv" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.753148 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-575cc76dd7-swvhx" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.753198 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.753337 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-554bc84945-x99pt" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.753353 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74977f9d76-k6dlw" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.753990 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0b969-account-delete-2lntr" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.756151 4858 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/heataca4-account-delete-65j4m" secret="" err="secret \"galera-openstack-dockercfg-xjdh8\" not found" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.760477 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.766205 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.781778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "879cb25d-5d39-48df-ac21-505127e58fd1" (UID: "879cb25d-5d39-48df-ac21-505127e58fd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.788953 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830833 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830859 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830868 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830877 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830886 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830894 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.830920 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.832593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cf713f2-824f-4d23-bb3a-1b1f7ef99020" (UID: "4cf713f2-824f-4d23-bb3a-1b1f7ef99020"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.845803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.848542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "4cf713f2-824f-4d23-bb3a-1b1f7ef99020" (UID: "4cf713f2-824f-4d23-bb3a-1b1f7ef99020"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.849388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e952720-9083-48e0-96d1-54f1cfacfbf9" (UID: "1e952720-9083-48e0-96d1-54f1cfacfbf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.851235 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.864098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data" (OuterVolumeSpecName: "config-data") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.873235 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.876244 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.878181 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.883133 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data" (OuterVolumeSpecName: "config-data") pod "48b023fd-a47e-4fac-b75f-50e32cd8ed68" (UID: "48b023fd-a47e-4fac-b75f-50e32cd8ed68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.884137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" (UID: "d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.900188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4271125-14af-4748-97ad-ed766b2d26b8" (UID: "b4271125-14af-4748-97ad-ed766b2d26b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.911347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data" (OuterVolumeSpecName: "config-data") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.916081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data" (OuterVolumeSpecName: "config-data") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.918499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1659016a-e2b7-4dbd-8ad1-56bef9995d64" (UID: "1659016a-e2b7-4dbd-8ad1-56bef9995d64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.918665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.920971 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data" (OuterVolumeSpecName: "config-data") pod "bda2acef-1ebf-4106-b75f-57d3c2a80758" (UID: "bda2acef-1ebf-4106-b75f-57d3c2a80758"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932691 4858 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932723 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932737 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932749 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932761 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda2acef-1ebf-4106-b75f-57d3c2a80758-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932772 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932783 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932795 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932807 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932821 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932833 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932844 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932855 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932865 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932877 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b023fd-a47e-4fac-b75f-50e32cd8ed68-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932888 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1659016a-e2b7-4dbd-8ad1-56bef9995d64-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.932900 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.936750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data" (OuterVolumeSpecName: "config-data") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.945124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data" (OuterVolumeSpecName: "config-data") pod "1e952720-9083-48e0-96d1-54f1cfacfbf9" (UID: "1e952720-9083-48e0-96d1-54f1cfacfbf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.947715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data" (OuterVolumeSpecName: "config-data") pod "0da6e158-7f6d-434b-bd4a-9a902a5879d9" (UID: "0da6e158-7f6d-434b-bd4a-9a902a5879d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.956768 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.956794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.959072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "4cf713f2-824f-4d23-bb3a-1b1f7ef99020" (UID: "4cf713f2-824f-4d23-bb3a-1b1f7ef99020"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.959766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data" (OuterVolumeSpecName: "config-data") pod "879cb25d-5d39-48df-ac21-505127e58fd1" (UID: "879cb25d-5d39-48df-ac21-505127e58fd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.960752 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc3d42a8-0810-462c-abd3-73b770f8fb03" (UID: "dc3d42a8-0810-462c-abd3-73b770f8fb03"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.961604 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.975877 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data" (OuterVolumeSpecName: "config-data") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.982126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1e952720-9083-48e0-96d1-54f1cfacfbf9" (UID: "1e952720-9083-48e0-96d1-54f1cfacfbf9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.983467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c44b3c43-4aed-4726-a49e-693cd279bca6" (UID: "c44b3c43-4aed-4726-a49e-693cd279bca6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.994879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b4271125-14af-4748-97ad-ed766b2d26b8" (UID: "b4271125-14af-4748-97ad-ed766b2d26b8"). 
InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:49.999900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.008303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data" (OuterVolumeSpecName: "config-data") pod "f559e642-5710-41ad-b508-a76cf28d62ca" (UID: "f559e642-5710-41ad-b508-a76cf28d62ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.013979 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e9999f0-5166-4fe0-9110-374b372ff6da" (UID: "2e9999f0-5166-4fe0-9110-374b372ff6da"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.018120 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data" (OuterVolumeSpecName: "config-data") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.034997 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035052 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035062 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035071 4858 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf713f2-824f-4d23-bb3a-1b1f7ef99020-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035080 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035089 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035097 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/879cb25d-5d39-48df-ac21-505127e58fd1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035104 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035112 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3d42a8-0810-462c-abd3-73b770f8fb03-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035121 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035130 4858 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4271125-14af-4748-97ad-ed766b2d26b8-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035138 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035145 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9999f0-5166-4fe0-9110-374b372ff6da-config-data\") on node \"crc\" 
DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035153 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44b3c43-4aed-4726-a49e-693cd279bca6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035161 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e952720-9083-48e0-96d1-54f1cfacfbf9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035169 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da6e158-7f6d-434b-bd4a-9a902a5879d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.035177 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f559e642-5710-41ad-b508-a76cf28d62ca-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.038068 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "555e309c-8c41-4ac1-8eca-60e203f92e4e" (UID: "555e309c-8c41-4ac1-8eca-60e203f92e4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.137592 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.137633 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555e309c-8c41-4ac1-8eca-60e203f92e4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.137680 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:52.137658524 +0000 UTC m=+8953.979081590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70","Type":"ContainerDied","Data":"78b2b153ac1b3d9aed94d5ae9675a45b8e47b2a4cc4aa949dfee02e41d3b4cd6"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58695b9cb9-h2cjl" event={"ID":"c44b3c43-4aed-4726-a49e-693cd279bca6","Type":"ContainerDied","Data":"bcecdda848a13002e10d69cd33551948facb79bc1968678772f24017147be0e3"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerDied","Data":"48243763ff91a842163928192fc2ea246f302325792033ccd2427519d16f31b0"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3d42a8-0810-462c-abd3-73b770f8fb03","Type":"ContainerDied","Data":"3d3f75511e59101b3bf5440d6488bcafc67c4b25b4a08c5566b7aae3df94b4ee"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8911-account-delete-lsr9n" event={"ID":"1c0f4278-ebde-458a-85b5-9f95824cee1a","Type":"ContainerDied","Data":"0786ac43f63fc86c1b0ae25e982450b6bafd98213c721a56bab5e454872c99b8"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178590 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0786ac43f63fc86c1b0ae25e982450b6bafd98213c721a56bab5e454872c99b8" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"555e309c-8c41-4ac1-8eca-60e203f92e4e","Type":"ContainerDied","Data":"67d7dacb9dd84b77bc7123d14218c050d3f94077f543d53dc2bdb195e13800a6"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e952720-9083-48e0-96d1-54f1cfacfbf9","Type":"ContainerDied","Data":"be6d76c6e63022d4269f76f4f45382ba6d9a577eda40fb9cbeffb17a7e09c94a"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerDied","Data":"7a1b9aa9bf7fdcfe3b6dd842717d88716652a749a754b92b43ad5226f5e6ec33"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cf713f2-824f-4d23-bb3a-1b1f7ef99020","Type":"ContainerDied","Data":"373f8ea53fb8392f4d52049d9a81199da932aa3fd6137c5e8f6369312b58f59b"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84d7f7895d-dzj8l" 
event={"ID":"879cb25d-5d39-48df-ac21-505127e58fd1","Type":"ContainerDied","Data":"9d11dd0d0c260792034168f0178a2f5d52c9eb6de47e59d32c453ae1f0484a85"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican04c4-account-delete-8pfbj" event={"ID":"7c014721-aa5e-4b1e-93b7-36b6832df6c6","Type":"ContainerDied","Data":"8c6e5012f5aff653db545323992a0dd2c4aa2259f6aef7f23b60f080e644526f"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178688 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6e5012f5aff653db545323992a0dd2c4aa2259f6aef7f23b60f080e644526f" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178697 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b4271125-14af-4748-97ad-ed766b2d26b8","Type":"ContainerDied","Data":"c222d0459fde1fa7c611a3b2d3152ef7a33b380e390942f5ad6ad138ec2d0ab9"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance034d-account-delete-lrkfs" event={"ID":"084a15f9-e534-46ad-b38a-17eeb1b6589e","Type":"ContainerDied","Data":"9118a6d66e855c4ecab5223b114637d34770ddab34112f5da29558ac484fee65"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178722 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9118a6d66e855c4ecab5223b114637d34770ddab34112f5da29558ac484fee65" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-554bc84945-x99pt" event={"ID":"f559e642-5710-41ad-b508-a76cf28d62ca","Type":"ContainerDied","Data":"7842f8034764b2924bf41d03e6be4b569c6b8d8f6de37bcfc0f1067002067f50"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.178756 4858 scope.go:117] "RemoveContainer" containerID="2616f6e010c2f47567c82c59233a83474d6307221bc0e3019310b01ca819c5e0" Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.240709 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.240783 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:52.240765773 +0000 UTC m=+8954.082188789 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.334763 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.336718 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.339610 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:39:50 crc kubenswrapper[4858]: E1122 09:39:50.339669 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.511749 4858 scope.go:117] "RemoveContainer" containerID="ff441736a1f0bdc42df1f5f8ac8566ce878fc447391e44d3d92513cd53973a0c" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.697425 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.801308 4858 generic.go:334] "Generic (PLEG): container finished" podID="1354cd0c-52c3-4174-b012-21a2b5ea8324" containerID="f098abc2e40e7e1a013de3bcdfe604e5a7ae91217777b7915ebd28ba5482db6d" exitCode=0 Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.801370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7567c6b846-s845h" event={"ID":"1354cd0c-52c3-4174-b012-21a2b5ea8324","Type":"ContainerDied","Data":"f098abc2e40e7e1a013de3bcdfe604e5a7ae91217777b7915ebd28ba5482db6d"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.844692 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"0e49c73afb70240d2bfb3f0d91318ec2304f8e894976f6aea5e6c68307db741f"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.858799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"e8384e15-b249-44a6-8d35-8a2066b3da7b","Type":"ContainerDied","Data":"d3e620de4d5b38f5a3b6436d53ac32053199e5dc69112d9c4a5a0c39c93238da"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.859012 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e620de4d5b38f5a3b6436d53ac32053199e5dc69112d9c4a5a0c39c93238da" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.886914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"59060e41-09d2-4441-8563-5302fd77a52d","Type":"ContainerDied","Data":"97895e7fd29c018ddbfbfc26421fe78f6078d6297fcc2c821595d8c5df1e2ea2"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.887280 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97895e7fd29c018ddbfbfc26421fe78f6078d6297fcc2c821595d8c5df1e2ea2" Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.899570 4858 generic.go:334] "Generic (PLEG): container finished" podID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerID="37be865a00cf89c403b4aeab789ef0fd27e0c3496d6c037ceca384efb5e151a4" exitCode=0 Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.899604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerDied","Data":"37be865a00cf89c403b4aeab789ef0fd27e0c3496d6c037ceca384efb5e151a4"} Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.913724 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindera285-account-delete-9xdwn"] Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.932232 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cindera285-account-delete-9xdwn"] Nov 22 09:39:50 crc kubenswrapper[4858]: I1122 09:39:50.986956 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron6b93-account-delete-n54tn"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.003845 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron6b93-account-delete-n54tn"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.089423 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican04c4-account-delete-8pfbj"] Nov 22 09:39:51 crc kubenswrapper[4858]: E1122 09:39:51.089639 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:51 crc kubenswrapper[4858]: E1122 09:39:51.089692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:55.089679241 +0000 UTC m=+8956.931102247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.097058 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican04c4-account-delete-8pfbj"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.192339 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance034d-account-delete-lrkfs"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.198766 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance034d-account-delete-lrkfs"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.224307 4858 scope.go:117] "RemoveContainer" containerID="3e9f60a9242f5ea9166f64aec3d772c195c831a12e40616f61d15e94761b65aa" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.264307 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.273607 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.287851 4858 scope.go:117] "RemoveContainer" containerID="57907d16e0311bc717a33ae1f359ab9a46d08e1abe4ca40d8893d8086ef774ac" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.343744 4858 scope.go:117] "RemoveContainer" containerID="e8e0687f6df23a2cb8e5fca6694574c9fc79ab632a7cbd059eef1fbf16f9f711" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400355 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 
09:39:51.400394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400564 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4kv\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlcvp\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400689 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 
09:39:51.400747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.400926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.402209 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") pod \"59060e41-09d2-4441-8563-5302fd77a52d\" (UID: \"59060e41-09d2-4441-8563-5302fd77a52d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.402273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret\") pod \"e8384e15-b249-44a6-8d35-8a2066b3da7b\" (UID: \"e8384e15-b249-44a6-8d35-8a2066b3da7b\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.404704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.405939 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.406935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.407611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.410670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.411956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.416501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp" (OuterVolumeSpecName: "kube-api-access-jlcvp") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "kube-api-access-jlcvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.419030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.421887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.423061 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.428488 4858 scope.go:117] "RemoveContainer" containerID="3967676f8e95adb5ee5014b410d8fa6ed22970b37607a556cdda336ed986c928" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.428610 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.428803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info" (OuterVolumeSpecName: "pod-info") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.429071 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv" (OuterVolumeSpecName: "kube-api-access-jr4kv") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "kube-api-access-jr4kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.432671 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.433513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.448658 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.460698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.461201 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079" (OuterVolumeSpecName: "persistence") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.469505 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-mzr28"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.477957 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-mzr28"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.487446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data" (OuterVolumeSpecName: "config-data") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.496079 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-aca4-account-create-kb47r"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.500078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f" (OuterVolumeSpecName: "persistence") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504794 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504835 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 
09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504856 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.504902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt68j\" (UniqueName: \"kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j\") pod \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\" (UID: \"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505220 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505252 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") on node \"crc\" " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505262 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8384e15-b249-44a6-8d35-8a2066b3da7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505280 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") on node \"crc\" " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505289 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505299 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505308 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505331 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505339 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505349 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc 
kubenswrapper[4858]: I1122 09:39:51.505358 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr4kv\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-kube-api-access-jr4kv\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505366 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505374 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlcvp\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-kube-api-access-jlcvp\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505382 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59060e41-09d2-4441-8563-5302fd77a52d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505389 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59060e41-09d2-4441-8563-5302fd77a52d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505397 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.505405 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8384e15-b249-44a6-8d35-8a2066b3da7b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.508392 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.517380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j" (OuterVolumeSpecName: "kube-api-access-jt68j") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "kube-api-access-jt68j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.515599 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.520431 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.522722 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heataca4-account-delete-65j4m" podUID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" containerName="mariadb-account-delete" containerID="cri-o://071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b" gracePeriod=30 Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.515406 4858 scope.go:117] "RemoveContainer" containerID="87056ef5db220c131bb3ec20fed1d41cb562684d629666af50d9c09b8a77410d" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.520836 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.533519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.538019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data" (OuterVolumeSpecName: "config-data") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.554719 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.571534 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084a15f9-e534-46ad-b38a-17eeb1b6589e" path="/var/lib/kubelet/pods/084a15f9-e534-46ad-b38a-17eeb1b6589e/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.589186 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts" (OuterVolumeSpecName: "scripts") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.589688 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" path="/var/lib/kubelet/pods/0da6e158-7f6d-434b-bd4a-9a902a5879d9/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.596408 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf" (OuterVolumeSpecName: "server-conf") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.596509 4858 scope.go:117] "RemoveContainer" containerID="5e0c7f07b403939e0b30d379cfb2f6f7c0e0f0331d4da8acd1e935938d2cf0d3" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.603027 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.603338 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079") on node "crc" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.609112 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726135ea-5ba7-49da-ac47-303d08f1ac58" path="/var/lib/kubelet/pods/726135ea-5ba7-49da-ac47-303d08f1ac58/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.610234 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c014721-aa5e-4b1e-93b7-36b6832df6c6" path="/var/lib/kubelet/pods/7c014721-aa5e-4b1e-93b7-36b6832df6c6/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.613559 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938520b5-d4e9-489e-8f92-642c144d69bc" path="/var/lib/kubelet/pods/938520b5-d4e9-489e-8f92-642c144d69bc/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.614261 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" path="/var/lib/kubelet/pods/ce9f6a1a-f6db-4db1-a07e-62baedc8fc60/volumes" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.614265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.614299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.614543 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.614739 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f") on node "crc" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.615480 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.650748 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.651088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.651738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.651936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.652028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.653236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.653369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.653518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.658332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.658631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle\") pod 
\"f30dfd03-0897-4211-b0d7-aabfd726e408\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.658765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data\") pod \"f30dfd03-0897-4211-b0d7-aabfd726e408\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.658878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.658993 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.659121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clfwk\" (UniqueName: \"kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk\") pod \"1354cd0c-52c3-4174-b012-21a2b5ea8324\" (UID: \"1354cd0c-52c3-4174-b012-21a2b5ea8324\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.659247 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.663233 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5zvv\" (UniqueName: \"kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv\") pod \"398c6958-f902-4b59-9afd-0275dea7251d\" (UID: \"398c6958-f902-4b59-9afd-0275dea7251d\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.663510 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w24b7\" (UniqueName: \"kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7\") pod \"f30dfd03-0897-4211-b0d7-aabfd726e408\" (UID: \"f30dfd03-0897-4211-b0d7-aabfd726e408\") " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.676110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7" (OuterVolumeSpecName: "kube-api-access-w24b7") pod "f30dfd03-0897-4211-b0d7-aabfd726e408" (UID: "f30dfd03-0897-4211-b0d7-aabfd726e408"). InnerVolumeSpecName "kube-api-access-w24b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.680409 4858 scope.go:117] "RemoveContainer" containerID="0e440c8b36113cc42b6ddd774ee75f89f04beccb033e5b4e3d7827901f46cf17" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.681311 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-aca4-account-create-kb47r"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.681375 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.681392 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.681713 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.698050 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.698181 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.698380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.698719 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.699979 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.702534 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703162 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a069c1b-f1a2-4a50-b828-7c411ef6c01f\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703189 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b38d528c-3836-43f6-a9ff-cc0d91f42079\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703206 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398c6958-f902-4b59-9afd-0275dea7251d-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703240 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w24b7\" (UniqueName: \"kubernetes.io/projected/f30dfd03-0897-4211-b0d7-aabfd726e408-kube-api-access-w24b7\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703249 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703260 4858 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703273 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703283 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703292 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703458 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398c6958-f902-4b59-9afd-0275dea7251d-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703500 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt68j\" (UniqueName: \"kubernetes.io/projected/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-kube-api-access-jt68j\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703510 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/59060e41-09d2-4441-8563-5302fd77a52d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.703546 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8384e15-b249-44a6-8d35-8a2066b3da7b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.709707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.733614 4858 scope.go:117] "RemoveContainer" containerID="2a569e7aef5c1478654a43f23ff834b089ab7b81d90062f8bc434d0602c00539" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.751518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk" (OuterVolumeSpecName: "kube-api-access-clfwk") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "kube-api-access-clfwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.756934 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.757019 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.765388 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.770521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv" (OuterVolumeSpecName: "kube-api-access-q5zvv") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "kube-api-access-q5zvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.772609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts" (OuterVolumeSpecName: "scripts") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.784917 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.791446 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.797411 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.805185 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clfwk\" (UniqueName: \"kubernetes.io/projected/1354cd0c-52c3-4174-b012-21a2b5ea8324-kube-api-access-clfwk\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.805213 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5zvv\" (UniqueName: \"kubernetes.io/projected/398c6958-f902-4b59-9afd-0275dea7251d-kube-api-access-q5zvv\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.805221 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.805231 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.805240 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.810522 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.816788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff" (OuterVolumeSpecName: "mysql-db") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.832158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data" (OuterVolumeSpecName: "config-data") pod "f30dfd03-0897-4211-b0d7-aabfd726e408" (UID: "f30dfd03-0897-4211-b0d7-aabfd726e408"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.841960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.852360 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-74977f9d76-k6dlw"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.866798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data" (OuterVolumeSpecName: "config-data") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.878396 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.883407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.890858 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-554bc84945-x99pt"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.898338 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement8911-account-delete-lsr9n"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.900905 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f30dfd03-0897-4211-b0d7-aabfd726e408" (UID: "f30dfd03-0897-4211-b0d7-aabfd726e408"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.900973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.905745 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.906842 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement8911-account-delete-lsr9n"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.906881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.907983 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.908007 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30dfd03-0897-4211-b0d7-aabfd726e408-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.908035 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.908044 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.908052 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.908080 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") on node \"crc\" " Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.913811 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7567c6b846-s845h" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.913938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7567c6b846-s845h" event={"ID":"1354cd0c-52c3-4174-b012-21a2b5ea8324","Type":"ContainerDied","Data":"2dcbcd5517ab0f48c19cd12a19a6a3e26f9a3af4196ebebe3ed9a963a4979fbd"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.917285 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.917841 4858 generic.go:334] "Generic (PLEG): container finished" podID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" exitCode=0 Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.917910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f30dfd03-0897-4211-b0d7-aabfd726e408","Type":"ContainerDied","Data":"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.917933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f30dfd03-0897-4211-b0d7-aabfd726e408","Type":"ContainerDied","Data":"577213e76612d6fced4a6876b82c510d41fc3c1993e4ddcb400b7891a06e38d1"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.918045 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.918537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.918673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.922855 4858 generic.go:334] "Generic (PLEG): container finished" podID="398c6958-f902-4b59-9afd-0275dea7251d" containerID="cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e" exitCode=0 Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.922965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerDied","Data":"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.923002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398c6958-f902-4b59-9afd-0275dea7251d","Type":"ContainerDied","Data":"2a4d9a98a563fe256ec2c478d9c2a855290360e126d5e3cc7112649d1622a622"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.923091 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.925818 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-575cc76dd7-swvhx"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.928559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" (UID: "0cf93c03-ace2-450f-9dff-6ea5e6fa72d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.930796 4858 generic.go:334] "Generic (PLEG): container finished" podID="419367a7-1838-4692-b6fc-f266985765d7" containerID="9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" exitCode=0 Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.930872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"419367a7-1838-4692-b6fc-f266985765d7","Type":"ContainerDied","Data":"9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.930903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"419367a7-1838-4692-b6fc-f266985765d7","Type":"ContainerDied","Data":"951a9be14148e095ca4c2e063c098b84fadfbe76d48a87be89075693d1592785"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.930919 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951a9be14148e095ca4c2e063c098b84fadfbe76d48a87be89075693d1592785" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.933733 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.945478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0cf93c03-ace2-450f-9dff-6ea5e6fa72d8","Type":"ContainerDied","Data":"17559e33db3a210ab25ba541c0fefc0f1f394263d66cd03501c6412f059d82a6"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.945629 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.950228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "59060e41-09d2-4441-8563-5302fd77a52d" (UID: "59060e41-09d2-4441-8563-5302fd77a52d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.959715 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-865798754b-wklbv"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.960871 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.961699 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff") on node "crc" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.966904 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" exitCode=0 Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.967020 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.967517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff","Type":"ContainerDied","Data":"94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.967557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff","Type":"ContainerDied","Data":"60f74af907b70ecb84f1d003df80f0b9efd231fc6135a270c4e922eaf40dc680"} Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.967574 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f74af907b70ecb84f1d003df80f0b9efd231fc6135a270c4e922eaf40dc680" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.967629 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.974968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.980183 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.989081 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:39:51 crc kubenswrapper[4858]: I1122 09:39:51.989281 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapi04c7-account-delete-q782b" podUID="15c7de97-b620-4e9b-8e17-27da546d6fb8" containerName="mariadb-account-delete" containerID="cri-o://f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1" gracePeriod=30 Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.001503 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.007245 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.009966 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-072e7310-c286-44ea-b719-cd8dfc70e2ff\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.010002 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59060e41-09d2-4441-8563-5302fd77a52d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.010019 4858 reconciler_common.go:293] "Volume detached 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.010031 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.010044 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.009971 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "398c6958-f902-4b59-9afd-0275dea7251d" (UID: "398c6958-f902-4b59-9afd-0275dea7251d"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.012772 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.020960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data" (OuterVolumeSpecName: "config-data") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.021045 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-84d7f7895d-dzj8l"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.024370 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.027854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.028349 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-58695b9cb9-h2cjl"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.036446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1354cd0c-52c3-4174-b012-21a2b5ea8324" (UID: "1354cd0c-52c3-4174-b012-21a2b5ea8324"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.040512 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.040726 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell0b969-account-delete-2lntr" podUID="eb9be543-7566-4423-b4ed-5d9596cf21a4" containerName="mariadb-account-delete" containerID="cri-o://3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c" gracePeriod=30 Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.055131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e8384e15-b249-44a6-8d35-8a2066b3da7b" (UID: "e8384e15-b249-44a6-8d35-8a2066b3da7b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.097648 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.105668 4858 scope.go:117] "RemoveContainer" containerID="ffde0c5535e5575efcd312c44becdc816a46ec2830edcdf8c7cac194047d0a3d" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.105729 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.111710 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.111735 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.111746 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1354cd0c-52c3-4174-b012-21a2b5ea8324-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.111802 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8384e15-b249-44a6-8d35-8a2066b3da7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.111815 4858 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398c6958-f902-4b59-9afd-0275dea7251d-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.155035 4858 scope.go:117] "RemoveContainer" containerID="cd4ac53c7c037b114448c74d1fb5ca115e64028fa709acf595b4d3e033563293" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.155166 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.162863 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.171660 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] 
Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.178995 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.188518 4858 scope.go:117] "RemoveContainer" containerID="55ca57bd132c43b406a7e2f78d44ccc4ccfef51b3c54f7deb21f3fcdf315f42d" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.188718 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.193283 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.212823 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-858lb\" (UniqueName: \"kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb\") pod \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.212869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data\") pod \"419367a7-1838-4692-b6fc-f266985765d7\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.212988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle\") pod \"419367a7-1838-4692-b6fc-f266985765d7\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.213030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data\") pod \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.213062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle\") pod \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\" (UID: \"a2e9b1a0-f2e8-4537-86cb-7651a5f44fff\") " Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.213092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2rnw\" (UniqueName: \"kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw\") pod \"419367a7-1838-4692-b6fc-f266985765d7\" (UID: \"419367a7-1838-4692-b6fc-f266985765d7\") " Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.213483 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.213536 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:56.213521976 +0000 UTC m=+8958.054944982 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.219402 4858 scope.go:117] "RemoveContainer" containerID="04b8eabacd40872b6a27353dabf534bacf39a98dba7ea7e75a7efb827a971e4a" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.222686 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb" (OuterVolumeSpecName: "kube-api-access-858lb") pod "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" (UID: "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff"). InnerVolumeSpecName "kube-api-access-858lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.233052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw" (OuterVolumeSpecName: "kube-api-access-m2rnw") pod "419367a7-1838-4692-b6fc-f266985765d7" (UID: "419367a7-1838-4692-b6fc-f266985765d7"). InnerVolumeSpecName "kube-api-access-m2rnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.261559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data" (OuterVolumeSpecName: "config-data") pod "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" (UID: "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.284470 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.286912 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.292960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.298242 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7567c6b846-s845h"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.304483 4858 scope.go:117] "RemoveContainer" containerID="0e9af0329f586f29a072f29f596f2dfaa4a85abfbc8d919d8bc5c0646f5a690e" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.315834 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-858lb\" (UniqueName: \"kubernetes.io/projected/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-kube-api-access-858lb\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.315861 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.315870 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2rnw\" (UniqueName: \"kubernetes.io/projected/419367a7-1838-4692-b6fc-f266985765d7-kube-api-access-m2rnw\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.315930 4858 configmap.go:193] Couldn't get configMap 
openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.315976 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:39:56.315962635 +0000 UTC m=+8958.157385641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.323243 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.328673 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.354500 4858 scope.go:117] "RemoveContainer" containerID="bfc6172709f280143555d90466293bfa2c52e1d1c69bc716075bf79ffcfb671e" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.377720 4858 scope.go:117] "RemoveContainer" containerID="f098abc2e40e7e1a013de3bcdfe604e5a7ae91217777b7915ebd28ba5482db6d" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.402266 4858 scope.go:117] "RemoveContainer" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.421494 4858 scope.go:117] "RemoveContainer" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.422052 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb\": container with ID starting with f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb not found: ID does not exist" containerID="f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.422094 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb"} err="failed to get container status \"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb\": rpc error: code = NotFound desc = could not find container \"f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb\": container with ID starting with f025171164474673be4ac62d1a53018d67233bde09f38f10bb3c639d867d2feb not found: ID does not exist" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.422120 4858 scope.go:117] "RemoveContainer" containerID="cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.450481 4858 scope.go:117] "RemoveContainer" containerID="a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.662237 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419367a7-1838-4692-b6fc-f266985765d7" (UID: "419367a7-1838-4692-b6fc-f266985765d7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.685989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data" (OuterVolumeSpecName: "config-data") pod "419367a7-1838-4692-b6fc-f266985765d7" (UID: "419367a7-1838-4692-b6fc-f266985765d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.689108 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" (UID: "a2e9b1a0-f2e8-4537-86cb-7651a5f44fff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.722774 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.722805 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419367a7-1838-4692-b6fc-f266985765d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.722820 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.767810 4858 scope.go:117] "RemoveContainer" containerID="cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e" Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.768222 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e\": container with ID starting with cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e not found: ID does not exist" containerID="cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.768260 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e"} err="failed to get container status \"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e\": rpc error: code = NotFound desc = could not find container \"cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e\": container with ID starting with cbab755692b90e31affd1d2723ff914dd06859b8488135d8491adc3bf2e4db5e not found: ID does not exist" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.768296 4858 scope.go:117] "RemoveContainer" containerID="a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619" Nov 22 09:39:52 crc kubenswrapper[4858]: E1122 09:39:52.768661 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619\": container with ID starting with a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619 not 
found: ID does not exist" containerID="a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.768682 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619"} err="failed to get container status \"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619\": rpc error: code = NotFound desc = could not find container \"a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619\": container with ID starting with a255736239d52c3233dec321ec5505ff4e12bcf3a6b371357b5c38f0eb7aa619 not found: ID does not exist" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.768695 4858 scope.go:117] "RemoveContainer" containerID="7c22c1647b976812b9a9e2e33c7532864b50fff449effa75e59831dd2b9c3c8f" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.811421 4858 scope.go:117] "RemoveContainer" containerID="c025194ddf7a068b573c198cafa5d2010ef4df2be27ccc43f8c168cace634da0" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.829664 4858 scope.go:117] "RemoveContainer" containerID="37be865a00cf89c403b4aeab789ef0fd27e0c3496d6c037ceca384efb5e151a4" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.848356 4858 scope.go:117] "RemoveContainer" containerID="921a60c315076bfa09bfff124ab92deecdc0625f09d81b3bb232d4ef1e293e81" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.958521 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gr6lp" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="registry-server" probeResult="failure" output=< Nov 22 09:39:52 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Nov 22 09:39:52 crc kubenswrapper[4858]: > Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.996223 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_acfb9d28-4ab9-4fb2-b490-f82a2ca905a4/ovn-northd/0.log" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.996280 4858 generic.go:334] "Generic (PLEG): container finished" podID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" exitCode=139 Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.996373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerDied","Data":"07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f"} Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.997818 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:39:52 crc kubenswrapper[4858]: I1122 09:39:52.997839 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.046076 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.052549 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.070981 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.077789 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:39:53 crc kubenswrapper[4858]: E1122 09:39:53.364170 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f is running failed: container process not found" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:53 crc kubenswrapper[4858]: E1122 09:39:53.364717 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f is running failed: container process not found" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:53 crc kubenswrapper[4858]: E1122 09:39:53.364964 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f is running failed: container process not found" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:39:53 crc kubenswrapper[4858]: E1122 09:39:53.365005 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.554723 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ad0b71-b272-4af6-a216-5fd4432bb7d7" path="/var/lib/kubelet/pods/04ad0b71-b272-4af6-a216-5fd4432bb7d7/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.556015 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" path="/var/lib/kubelet/pods/0cf93c03-ace2-450f-9dff-6ea5e6fa72d8/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.557264 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1354cd0c-52c3-4174-b012-21a2b5ea8324" path="/var/lib/kubelet/pods/1354cd0c-52c3-4174-b012-21a2b5ea8324/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.558798 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" path="/var/lib/kubelet/pods/1659016a-e2b7-4dbd-8ad1-56bef9995d64/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 
09:39:53.559600 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0f4278-ebde-458a-85b5-9f95824cee1a" path="/var/lib/kubelet/pods/1c0f4278-ebde-458a-85b5-9f95824cee1a/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.560564 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" path="/var/lib/kubelet/pods/1e952720-9083-48e0-96d1-54f1cfacfbf9/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.561169 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" path="/var/lib/kubelet/pods/2e9999f0-5166-4fe0-9110-374b372ff6da/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.561832 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398c6958-f902-4b59-9afd-0275dea7251d" path="/var/lib/kubelet/pods/398c6958-f902-4b59-9afd-0275dea7251d/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.563004 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419367a7-1838-4692-b6fc-f266985765d7" path="/var/lib/kubelet/pods/419367a7-1838-4692-b6fc-f266985765d7/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.563573 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" path="/var/lib/kubelet/pods/48b023fd-a47e-4fac-b75f-50e32cd8ed68/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.564162 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" path="/var/lib/kubelet/pods/4cf713f2-824f-4d23-bb3a-1b1f7ef99020/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.566027 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" path="/var/lib/kubelet/pods/555e309c-8c41-4ac1-8eca-60e203f92e4e/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.566849 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59060e41-09d2-4441-8563-5302fd77a52d" path="/var/lib/kubelet/pods/59060e41-09d2-4441-8563-5302fd77a52d/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.585948 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" path="/var/lib/kubelet/pods/879cb25d-5d39-48df-ac21-505127e58fd1/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.586542 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" path="/var/lib/kubelet/pods/a2e9b1a0-f2e8-4537-86cb-7651a5f44fff/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.587509 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4271125-14af-4748-97ad-ed766b2d26b8" path="/var/lib/kubelet/pods/b4271125-14af-4748-97ad-ed766b2d26b8/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.588026 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" path="/var/lib/kubelet/pods/bda2acef-1ebf-4106-b75f-57d3c2a80758/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.588585 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" path="/var/lib/kubelet/pods/c44b3c43-4aed-4726-a49e-693cd279bca6/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.589620 4858 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" path="/var/lib/kubelet/pods/d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.590385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" path="/var/lib/kubelet/pods/dc3d42a8-0810-462c-abd3-73b770f8fb03/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.591072 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" path="/var/lib/kubelet/pods/e8384e15-b249-44a6-8d35-8a2066b3da7b/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.592181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" path="/var/lib/kubelet/pods/f30dfd03-0897-4211-b0d7-aabfd726e408/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.592815 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" path="/var/lib/kubelet/pods/f559e642-5710-41ad-b508-a76cf28d62ca/volumes" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.650867 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.1.52:9311/healthcheck\": dial tcp 10.217.1.52:9311: i/o timeout" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.651074 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865798754b-wklbv" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.1.52:9311/healthcheck\": context deadline exceeded" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.776635 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.776905 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" containerName="adoption" containerID="cri-o://5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4" gracePeriod=30 Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.788451 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_acfb9d28-4ab9-4fb2-b490-f82a2ca905a4/ovn-northd/0.log" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.788513 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839636 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839699 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbbq\" (UniqueName: \"kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.839946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts\") pod \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\" (UID: \"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4\") " Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.840497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.840792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts" (OuterVolumeSpecName: "scripts") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.841760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config" (OuterVolumeSpecName: "config") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.855393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq" (OuterVolumeSpecName: "kube-api-access-qnbbq") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "kube-api-access-qnbbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.891478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.911803 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.931380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" (UID: "acfb9d28-4ab9-4fb2-b490-f82a2ca905a4"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941833 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941875 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941894 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941912 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941931 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbbq\" (UniqueName: \"kubernetes.io/projected/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-kube-api-access-qnbbq\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941947 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:53 crc kubenswrapper[4858]: I1122 09:39:53.941962 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.019350 4858 generic.go:334] "Generic (PLEG): container finished" podID="d38ef80a-bbad-4072-a37b-1e355a943447" containerID="412e82414159e7ac3a4aa5c2cccb641255d6bef151b2b51f1cf479bfc2da047b" exitCode=0 Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.019514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerDied","Data":"412e82414159e7ac3a4aa5c2cccb641255d6bef151b2b51f1cf479bfc2da047b"} Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.021994 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_acfb9d28-4ab9-4fb2-b490-f82a2ca905a4/ovn-northd/0.log" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.022064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"acfb9d28-4ab9-4fb2-b490-f82a2ca905a4","Type":"ContainerDied","Data":"75690b4e9af7a56e00f17a58b8d1e76359ca5b7d031ccb11514048168edcd317"} Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.022114 4858 scope.go:117] "RemoveContainer" containerID="67e1251b14e7b433d9d3a2e216ea575663775c6d203919cad452b0e46788dce2" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.022310 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.061588 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.071408 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.077448 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.077674 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" containerName="adoption" containerID="cri-o://748c7fd5b8d2394c9cb02c31b7296c97713c51013b8ab56c9ede3e3f67b3d1dd" gracePeriod=30 Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.082872 4858 scope.go:117] "RemoveContainer" containerID="07a1d573a57941565cdd19b4933479c87860ca32dd2521fac40a24bd6965772f" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.515742 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683505 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683622 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.683792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxrx\" (UniqueName: 
\"kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx\") pod \"d38ef80a-bbad-4072-a37b-1e355a943447\" (UID: \"d38ef80a-bbad-4072-a37b-1e355a943447\") " Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.702601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx" (OuterVolumeSpecName: "kube-api-access-xsxrx") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "kube-api-access-xsxrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.712290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.723860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config" (OuterVolumeSpecName: "config") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.736139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.745309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.752020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785835 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785896 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785920 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785945 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxrx\" (UniqueName: \"kubernetes.io/projected/d38ef80a-bbad-4072-a37b-1e355a943447-kube-api-access-xsxrx\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785968 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.785990 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.855229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d38ef80a-bbad-4072-a37b-1e355a943447" (UID: "d38ef80a-bbad-4072-a37b-1e355a943447"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:39:54 crc kubenswrapper[4858]: I1122 09:39:54.886976 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38ef80a-bbad-4072-a37b-1e355a943447-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.031237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc86c6f7-88xlp" event={"ID":"d38ef80a-bbad-4072-a37b-1e355a943447","Type":"ContainerDied","Data":"c516c0accc10540d6d5055e19e85dee17c6755eef19e87758313171f9011f512"} Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.031590 4858 scope.go:117] "RemoveContainer" containerID="96d625e1d523edde845f7074cc2ca87e3c4b5c2c1898cd03e2d07a4a1aab3b91" Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.031260 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc86c6f7-88xlp" Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.062637 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.068480 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7dc86c6f7-88xlp"] Nov 22 09:39:55 crc kubenswrapper[4858]: E1122 09:39:55.091743 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:55 crc kubenswrapper[4858]: E1122 09:39:55.091853 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:03.0918301 +0000 UTC m=+8964.933253146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.110764 4858 scope.go:117] "RemoveContainer" containerID="412e82414159e7ac3a4aa5c2cccb641255d6bef151b2b51f1cf479bfc2da047b" Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.550200 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" path="/var/lib/kubelet/pods/acfb9d28-4ab9-4fb2-b490-f82a2ca905a4/volumes" Nov 22 09:39:55 crc kubenswrapper[4858]: I1122 09:39:55.551054 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" path="/var/lib/kubelet/pods/d38ef80a-bbad-4072-a37b-1e355a943447/volumes" Nov 22 09:39:56 crc kubenswrapper[4858]: E1122 09:39:56.217767 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:56 crc kubenswrapper[4858]: E1122 09:39:56.217851 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:04.217833505 +0000 UTC m=+8966.059256531 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:39:56 crc kubenswrapper[4858]: E1122 09:39:56.319366 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:39:56 crc kubenswrapper[4858]: E1122 09:39:56.319444 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:04.319427747 +0000 UTC m=+8966.160850763 (durationBeforeRetry 8s). 
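
[Annotation] Several openstack account-delete pods in these entries are stuck in volume setup because the openstack-scripts ConfigMap no longer exists; the kubelet reschedules MountVolume.SetUp with a doubling backoff (8s in these entries, 16s on the retries further down). As a way to confirm the ConfigMap really is gone from the API server rather than merely unreadable by the kubelet, a minimal client-go sketch (the kubeconfig path and the check itself are illustrative assumptions, not part of the log):

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the CRC host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and name taken verbatim from the MountVolume errors in the log.
        _, err = client.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "openstack-scripts", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            fmt.Println("confirmed: configmap openstack/openstack-scripts does not exist")
        case err != nil:
            fmt.Println("lookup failed for another reason:", err)
        default:
            fmt.Println("configmap exists; the kubelet failure would have another cause")
        }
    }

[End annotation]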
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:40:00 crc kubenswrapper[4858]: E1122 09:40:00.325948 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:00 crc kubenswrapper[4858]: E1122 09:40:00.328733 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:00 crc kubenswrapper[4858]: E1122 09:40:00.330033 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:00 crc kubenswrapper[4858]: E1122 09:40:00.330077 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:40:00 crc kubenswrapper[4858]: I1122 09:40:00.694687 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Nov 22 09:40:01 crc kubenswrapper[4858]: I1122 09:40:01.980921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:40:02 crc kubenswrapper[4858]: I1122 09:40:02.055498 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:40:02 crc kubenswrapper[4858]: I1122 09:40:02.776102 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:40:03 crc kubenswrapper[4858]: I1122 09:40:03.139651 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gr6lp" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="registry-server" containerID="cri-o://0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868" gracePeriod=2 Nov 22 09:40:03 crc kubenswrapper[4858]: E1122 09:40:03.151094 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:03 crc kubenswrapper[4858]: E1122 09:40:03.151166 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts 
podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:19.151152379 +0000 UTC m=+8980.992575385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.090452 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.137059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities\") pod \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.137146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content\") pod \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.137200 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpjjr\" (UniqueName: \"kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr\") pod \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\" (UID: \"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966\") " Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.137797 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities" (OuterVolumeSpecName: "utilities") pod "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" (UID: "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.155008 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr" (OuterVolumeSpecName: "kube-api-access-kpjjr") pod "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" (UID: "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966"). InnerVolumeSpecName "kube-api-access-kpjjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.161816 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerID="0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868" exitCode=0 Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.161857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerDied","Data":"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868"} Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.161889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr6lp" event={"ID":"4ba14a1f-1786-4d30-a0ab-12ffd6ef1966","Type":"ContainerDied","Data":"d6caa434ae59e647ef258c4c9b8c1986e49dd5b80ae4fcadc75d3bc2c05689fc"} Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.161909 4858 scope.go:117] "RemoveContainer" containerID="0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.161924 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr6lp" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.188992 4858 scope.go:117] "RemoveContainer" containerID="e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.207437 4858 scope.go:117] "RemoveContainer" containerID="85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.234187 4858 scope.go:117] "RemoveContainer" containerID="0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868" Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.234656 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868\": container with ID starting with 0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868 not found: ID does not exist" containerID="0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.234689 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868"} err="failed to get container status \"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868\": rpc error: code = NotFound desc = could not find container \"0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868\": container with ID starting with 0599131f8d5eb672e17aacae4426ea16050d0bb8d0a490d43331e761745c6868 not found: ID does not exist" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.234714 4858 scope.go:117] "RemoveContainer" containerID="e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779" Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.235407 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779\": container with ID starting with e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779 not found: ID does not exist" 
containerID="e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.235446 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779"} err="failed to get container status \"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779\": rpc error: code = NotFound desc = could not find container \"e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779\": container with ID starting with e3327b44ed4fd0856501dc4b055e9ea3b475f832eea94017ad1c5711379c1779 not found: ID does not exist" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.235475 4858 scope.go:117] "RemoveContainer" containerID="85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1" Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.235810 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1\": container with ID starting with 85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1 not found: ID does not exist" containerID="85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.235929 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1"} err="failed to get container status \"85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1\": rpc error: code = NotFound desc = could not find container \"85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1\": container with ID starting with 85df5e2c949c315418e1cae418e59977c594497d75d4443bb7e9f309052931a1 not found: ID does not exist" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.239303 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.239343 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpjjr\" (UniqueName: \"kubernetes.io/projected/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-kube-api-access-kpjjr\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.239466 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.239547 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:20.239526111 +0000 UTC m=+8982.080949177 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.246692 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" (UID: "4ba14a1f-1786-4d30-a0ab-12ffd6ef1966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.342946 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:04 crc kubenswrapper[4858]: E1122 09:40:04.343038 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:20.343018803 +0000 UTC m=+8982.184441809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.342954 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.494353 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:40:04 crc kubenswrapper[4858]: I1122 09:40:04.501404 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gr6lp"] Nov 22 09:40:05 crc kubenswrapper[4858]: I1122 09:40:05.551440 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" path="/var/lib/kubelet/pods/4ba14a1f-1786-4d30-a0ab-12ffd6ef1966/volumes" Nov 22 09:40:10 crc kubenswrapper[4858]: E1122 09:40:10.326969 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:10 crc kubenswrapper[4858]: E1122 09:40:10.330297 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:10 crc kubenswrapper[4858]: E1122 09:40:10.332138 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:10 crc kubenswrapper[4858]: E1122 09:40:10.332210 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:40:10 crc kubenswrapper[4858]: I1122 09:40:10.704087 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c86c95b8-8h6xv" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.123:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.123:8443: connect: connection refused" Nov 22 09:40:10 crc kubenswrapper[4858]: I1122 09:40:10.704223 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:40:12 crc kubenswrapper[4858]: I1122 09:40:12.934073 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.090359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.090716 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljr6l\" (UniqueName: \"kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.090857 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.091011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.091123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.091261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data\") pod \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\" (UID: \"d53819e9-9206-49f4-a1a7-2d9459fcc7c7\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.090455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") 
pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.093689 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.182968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.183108 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l" (OuterVolumeSpecName: "kube-api-access-ljr6l") pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "kube-api-access-ljr6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.185457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts" (OuterVolumeSpecName: "scripts") pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.196128 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljr6l\" (UniqueName: \"kubernetes.io/projected/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-kube-api-access-ljr6l\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.196175 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.196192 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.214882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.272073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data" (OuterVolumeSpecName: "config-data") pod "d53819e9-9206-49f4-a1a7-2d9459fcc7c7" (UID: "d53819e9-9206-49f4-a1a7-2d9459fcc7c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.278022 4858 generic.go:334] "Generic (PLEG): container finished" podID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerID="ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650" exitCode=137 Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.278103 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerDied","Data":"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650"} Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.278229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d53819e9-9206-49f4-a1a7-2d9459fcc7c7","Type":"ContainerDied","Data":"a92c87717e1b3d301360396f4d4f5e7faf34a81696ea50b685f7f944d84c09ee"} Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.278262 4858 scope.go:117] "RemoveContainer" containerID="44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.278118 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.289017 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee404aa4-d838-4368-9e25-6648adde67ee" containerID="6a0b3388f3b07e344f0aa419e784922d216578af23d8b90ea1471324a5e1ccfa" exitCode=137 Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.289132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerDied","Data":"6a0b3388f3b07e344f0aa419e784922d216578af23d8b90ea1471324a5e1ccfa"} Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.297460 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.297597 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d53819e9-9206-49f4-a1a7-2d9459fcc7c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.324209 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.330741 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.387992 4858 scope.go:117] "RemoveContainer" containerID="ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.420790 4858 scope.go:117] "RemoveContainer" containerID="44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513" Nov 22 09:40:13 crc kubenswrapper[4858]: E1122 09:40:13.421690 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513\": container with ID starting with 44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513 not found: ID does not exist" containerID="44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 
09:40:13.421908 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513"} err="failed to get container status \"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513\": rpc error: code = NotFound desc = could not find container \"44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513\": container with ID starting with 44cf9b7ca83d7f79932f86109748d0d35598764e3e19bf27ec0b6a25908fc513 not found: ID does not exist" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.422095 4858 scope.go:117] "RemoveContainer" containerID="ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650" Nov 22 09:40:13 crc kubenswrapper[4858]: E1122 09:40:13.422846 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650\": container with ID starting with ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650 not found: ID does not exist" containerID="ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.422885 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650"} err="failed to get container status \"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650\": rpc error: code = NotFound desc = could not find container \"ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650\": container with ID starting with ce669f6c7de244c9c5e2eed0c30c0fee5369a41a794d3f4ca936d45da66c7650 not found: ID does not exist" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.558651 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" path="/var/lib/kubelet/pods/d53819e9-9206-49f4-a1a7-2d9459fcc7c7/volumes" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.688453 4858 util.go:48] "No ready sandbox for pod can be found. 
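
[Annotation] The "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above (for the registry-server and cinder-scheduler containers) are a benign race: CRI-O has already removed the container by the time the duplicate delete is attempted, so the status lookup returns a gRPC NotFound and the kubelet only logs it. A sketch of how such an error is typically classified as "already gone" (this is illustrative, not the kubelet's actual code path):

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyRemoved reports whether a CRI RemoveContainer/ContainerStatus error
    // just means the container no longer exists (the benign case logged above).
    func alreadyRemoved(err error) bool {
        if err == nil {
            return false
        }
        if s, ok := status.FromError(err); ok {
            return s.Code() == codes.NotFound
        }
        return false
    }

    func main() {
        // Simulated error carrying the same gRPC code as the log entries above.
        notFound := status.Error(codes.NotFound, "could not find container")
        fmt.Println("treat as already deleted:", alreadyRemoved(notFound))
        fmt.Println("other errors still surface:", alreadyRemoved(errors.New("rpc timeout")))
    }

[End annotation]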
Need to start a new one" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807648 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78zqd\" (UniqueName: \"kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807763 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.807888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data\") pod \"ee404aa4-d838-4368-9e25-6648adde67ee\" (UID: \"ee404aa4-d838-4368-9e25-6648adde67ee\") " Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.809293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs" (OuterVolumeSpecName: "logs") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.813050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.816782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd" (OuterVolumeSpecName: "kube-api-access-78zqd") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "kube-api-access-78zqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.829712 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data" (OuterVolumeSpecName: "config-data") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.832845 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.835931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts" (OuterVolumeSpecName: "scripts") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.859762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ee404aa4-d838-4368-9e25-6648adde67ee" (UID: "ee404aa4-d838-4368-9e25-6648adde67ee"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909494 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909536 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78zqd\" (UniqueName: \"kubernetes.io/projected/ee404aa4-d838-4368-9e25-6648adde67ee-kube-api-access-78zqd\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909551 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909562 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404aa4-d838-4368-9e25-6648adde67ee-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909574 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909585 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404aa4-d838-4368-9e25-6648adde67ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:13 crc kubenswrapper[4858]: I1122 09:40:13.909596 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee404aa4-d838-4368-9e25-6648adde67ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.302678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c86c95b8-8h6xv" event={"ID":"ee404aa4-d838-4368-9e25-6648adde67ee","Type":"ContainerDied","Data":"f6a8cdf219586b4cf7e0346640250bb59e7afdf0fe9ec129978130c3bee06d73"} Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.302988 4858 scope.go:117] "RemoveContainer" containerID="8ed3ae5cedd53bd3a69e7e010ea65e7a6fc66b139c069cae1957b6aaf00b873d" Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.302766 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c86c95b8-8h6xv" Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.348566 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.357362 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69c86c95b8-8h6xv"] Nov 22 09:40:14 crc kubenswrapper[4858]: I1122 09:40:14.519400 4858 scope.go:117] "RemoveContainer" containerID="6a0b3388f3b07e344f0aa419e784922d216578af23d8b90ea1471324a5e1ccfa" Nov 22 09:40:15 crc kubenswrapper[4858]: I1122 09:40:15.545543 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" path="/var/lib/kubelet/pods/ee404aa4-d838-4368-9e25-6648adde67ee/volumes" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.001187 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod178ee462-fc5c-4fc1-bdbc-22251a60c6a1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod178ee462-fc5c-4fc1-bdbc-22251a60c6a1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod178ee462_fc5c_4fc1_bdbc_22251a60c6a1.slice" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.001247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod178ee462-fc5c-4fc1-bdbc-22251a60c6a1] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod178ee462-fc5c-4fc1-bdbc-22251a60c6a1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod178ee462_fc5c_4fc1_bdbc_22251a60c6a1.slice" pod="openstack/ovsdbserver-sb-0" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.251992 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod81f0d7b5-53a2-4d57-8d3e-fce52b6fd098"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod81f0d7b5-53a2-4d57-8d3e-fce52b6fd098] : Timed out while waiting for systemd to remove kubepods-besteffort-pod81f0d7b5_53a2_4d57_8d3e_fce52b6fd098.slice" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.252348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod81f0d7b5-53a2-4d57-8d3e-fce52b6fd098] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod81f0d7b5-53a2-4d57-8d3e-fce52b6fd098] : Timed out while waiting for systemd to remove kubepods-besteffort-pod81f0d7b5_53a2_4d57_8d3e_fce52b6fd098.slice" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.334627 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59478d75c9-xdf7j" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.334699 4858 util.go:48] "No ready sandbox for pod can be found. 
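
[Annotation] The two "Failed to delete cgroup paths" entries above mean systemd did not confirm removal of the kubepods-besteffort-pod….slice transient units within the kubelet's timeout; the pod workers skip the sync and retry later. A stdlib-only sketch for checking from the node whether such a slice directory is in fact still present; the cgroup v2 mount point and slice-name layout are assumptions about this host, not taken from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // slicePath builds the cgroup directory the kubelet complains about, e.g.
    // pod UID 178ee462-... -> /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/
    //                          kubepods-besteffort-pod178ee462_....slice
    func slicePath(podUID string) string {
        unit := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
        return filepath.Join("/sys/fs/cgroup", "kubepods.slice", "kubepods-besteffort.slice", unit)
    }

    func main() {
        for _, uid := range []string{
            "178ee462-fc5c-4fc1-bdbc-22251a60c6a1", // ovsdbserver-sb-0
            "81f0d7b5-53a2-4d57-8d3e-fce52b6fd098", // dnsmasq-dns-59478d75c9-xdf7j
        } {
            if _, err := os.Stat(slicePath(uid)); os.IsNotExist(err) {
                fmt.Println(uid, "slice already gone; the timeout was only slow confirmation")
            } else {
                fmt.Println(uid, "slice still present (or not readable):", slicePath(uid))
            }
        }
    }

[End annotation]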
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.364632 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.372731 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59478d75c9-xdf7j"] Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.383578 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.390566 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542002 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542439 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542459 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542474 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="extract-content" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542482 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="extract-content" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542500 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542507 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542526 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" containerName="heat-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542535 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" containerName="heat-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542549 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542558 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542572 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1354cd0c-52c3-4174-b012-21a2b5ea8324" containerName="keystone-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542580 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1354cd0c-52c3-4174-b012-21a2b5ea8324" containerName="keystone-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542589 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-server" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542597 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-server" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542612 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542618 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542634 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084a15f9-e534-46ad-b38a-17eeb1b6589e" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542642 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="084a15f9-e534-46ad-b38a-17eeb1b6589e" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542658 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="prometheus" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542667 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="prometheus" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542676 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542683 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542695 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542703 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542715 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542735 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542747 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="dnsmasq-dns" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542754 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="dnsmasq-dns" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542763 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="extract-utilities" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542771 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="extract-utilities" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542787 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="setup-container" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542794 4858 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="setup-container" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542806 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542813 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542827 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" containerName="kube-state-metrics" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542838 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" containerName="kube-state-metrics" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542849 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542856 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542873 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-central-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542880 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-central-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542896 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542904 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542913 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542920 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542934 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c014721-aa5e-4b1e-93b7-36b6832df6c6" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542942 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c014721-aa5e-4b1e-93b7-36b6832df6c6" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542955 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="registry-server" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542963 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="registry-server" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.542976 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c6958-f902-4b59-9afd-0275dea7251d" 
containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.542992 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543006 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543013 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543023 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543030 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543043 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543064 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543084 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938520b5-d4e9-489e-8f92-642c144d69bc" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543092 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="938520b5-d4e9-489e-8f92-642c144d69bc" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543103 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543111 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543125 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-notifier" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543134 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-notifier" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543142 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543149 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543160 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" 
containerName="horizon-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543167 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543183 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="ovsdbserver-sb" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543191 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="ovsdbserver-sb" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543204 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543211 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543219 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="thanos-sidecar" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543226 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="thanos-sidecar" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543241 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4271125-14af-4748-97ad-ed766b2d26b8" containerName="memcached" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543256 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4271125-14af-4748-97ad-ed766b2d26b8" containerName="memcached" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543268 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="init" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543275 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="init" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543290 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-evaluator" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543299 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-evaluator" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543309 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543331 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543347 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543355 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543367 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1c0f4278-ebde-458a-85b5-9f95824cee1a" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543375 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0f4278-ebde-458a-85b5-9f95824cee1a" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543384 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543392 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543403 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="mysql-bootstrap" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543411 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="mysql-bootstrap" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543423 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543440 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543454 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543462 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543474 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543481 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543497 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543505 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="rabbitmq" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543526 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="rabbitmq" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543537 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543545 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543561 4858 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="419367a7-1838-4692-b6fc-f266985765d7" containerName="nova-scheduler-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543569 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="419367a7-1838-4692-b6fc-f266985765d7" containerName="nova-scheduler-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543584 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543592 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543604 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543611 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-api" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543618 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="probe" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543625 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="probe" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543640 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd68b47b-06e7-4e59-aad6-cae8c376573d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd68b47b-06e7-4e59-aad6-cae8c376573d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543659 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543666 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543676 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="cinder-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543684 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="cinder-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543695 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="alertmanager" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543702 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="alertmanager" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543716 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="init-config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543723 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="init-config-reloader" Nov 22 09:40:16 crc 
kubenswrapper[4858]: E1122 09:40:16.543732 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543740 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543754 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerName="heat-cfnapi" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543762 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerName="heat-cfnapi" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543792 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="ovsdbserver-nb" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543799 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="ovsdbserver-nb" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543810 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543817 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543831 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="mysql-bootstrap" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543839 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c6958-f902-4b59-9afd-0275dea7251d" containerName="mysql-bootstrap" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543853 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="setup-container" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543862 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="setup-container" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543872 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543879 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543887 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="init-config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543895 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="init-config-reloader" Nov 22 09:40:16 crc 
kubenswrapper[4858]: E1122 09:40:16.543904 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-notification-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543911 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-notification-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543926 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543933 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543944 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="sg-core" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543951 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="sg-core" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543962 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="rabbitmq" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543969 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="rabbitmq" Nov 22 09:40:16 crc kubenswrapper[4858]: E1122 09:40:16.543983 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.543991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544350 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544363 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-evaluator" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544375 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544388 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544400 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee404aa4-d838-4368-9e25-6648adde67ee" containerName="horizon-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544413 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44b3c43-4aed-4726-a49e-693cd279bca6" containerName="heat-cfnapi" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544426 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-central-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544439 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="398c6958-f902-4b59-9afd-0275dea7251d" 
containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544451 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="probe" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544462 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544473 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="ovsdbserver-sb" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544484 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e9b1a0-f2e8-4537-86cb-7651a5f44fff" containerName="nova-cell1-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544495 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544510 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-metadata" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544520 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="alertmanager" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544532 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf713f2-824f-4d23-bb3a-1b1f7ef99020" containerName="kube-state-metrics" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544548 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0f4278-ebde-458a-85b5-9f95824cee1a" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544562 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="ceilometer-notification-agent" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544572 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544584 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544595 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-server" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544606 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544616 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c014721-aa5e-4b1e-93b7-36b6832df6c6" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544623 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="thanos-sidecar" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544630 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9f6a1a-f6db-4db1-a07e-62baedc8fc60" containerName="mariadb-account-delete" Nov 22 09:40:16 crc 
kubenswrapper[4858]: I1122 09:40:16.544643 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="938520b5-d4e9-489e-8f92-642c144d69bc" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544657 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544669 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544682 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="419367a7-1838-4692-b6fc-f266985765d7" containerName="nova-scheduler-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544690 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9926527b-80a8-4a26-bc82-053200dbb73f" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544699 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38ef80a-bbad-4072-a37b-1e355a943447" containerName="neutron-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544706 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd68b47b-06e7-4e59-aad6-cae8c376573d" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544721 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="sg-core" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544731 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544739 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1659016a-e2b7-4dbd-8ad1-56bef9995d64" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544751 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20945ad-d582-4bb8-a485-c6dbb78207fe" containerName="prometheus" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544762 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf93c03-ace2-450f-9dff-6ea5e6fa72d8" containerName="proxy-httpd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544775 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544784 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-listener" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544794 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="ovsdbserver-nb" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544809 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f559e642-5710-41ad-b508-a76cf28d62ca" containerName="heat-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544823 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="084a15f9-e534-46ad-b38a-17eeb1b6589e" containerName="mariadb-account-delete" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544834 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d53819e9-9206-49f4-a1a7-2d9459fcc7c7" containerName="cinder-scheduler" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544842 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e952720-9083-48e0-96d1-54f1cfacfbf9" containerName="nova-metadata-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544856 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda2acef-1ebf-4106-b75f-57d3c2a80758" containerName="barbican-worker" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544868 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0691c992-818e-46a2-9057-2f9548253076" containerName="config-reloader" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544881 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1354cd0c-52c3-4174-b012-21a2b5ea8324" containerName="keystone-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11074703-ddac-49f9-b53d-5ec6c721af7d" containerName="galera" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8384e15-b249-44a6-8d35-8a2066b3da7b" containerName="rabbitmq" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544915 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-notifier" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544929 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfb9d28-4ab9-4fb2-b490-f82a2ca905a4" containerName="ovn-northd" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544940 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da6e158-7f6d-434b-bd4a-9a902a5879d9" containerName="cinder-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544952 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="879cb25d-5d39-48df-ac21-505127e58fd1" containerName="barbican-keystone-listener-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544962 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4271125-14af-4748-97ad-ed766b2d26b8" containerName="memcached" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544971 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9999f0-5166-4fe0-9110-374b372ff6da" containerName="placement-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544984 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.544994 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3945c5e-f85e-4fa1-b48f-1ec3bbc20d70" containerName="glance-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545007 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30dfd03-0897-4211-b0d7-aabfd726e408" containerName="nova-cell0-conductor-conductor" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545016 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d42a8-0810-462c-abd3-73b770f8fb03" containerName="nova-api-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545029 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="59060e41-09d2-4441-8563-5302fd77a52d" containerName="rabbitmq" Nov 22 09:40:16 
crc kubenswrapper[4858]: I1122 09:40:16.545042 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d944bd-93c5-4863-96df-f83a4ff1db9b" containerName="openstack-network-exporter" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545053 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba14a1f-1786-4d30-a0ab-12ffd6ef1966" containerName="registry-server" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545063 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" containerName="dnsmasq-dns" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545072 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="555e309c-8c41-4ac1-8eca-60e203f92e4e" containerName="aodh-api" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.545081 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b023fd-a47e-4fac-b75f-50e32cd8ed68" containerName="barbican-api-log" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.546482 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.549927 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.576974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.577078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.577258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n699c\" (UniqueName: \"kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.677994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n699c\" (UniqueName: \"kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.678079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.678122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.678695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.678713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.707533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n699c\" (UniqueName: \"kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c\") pod \"certified-operators-ck5f9\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:16 crc kubenswrapper[4858]: I1122 09:40:16.904620 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:17 crc kubenswrapper[4858]: I1122 09:40:17.420421 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:17 crc kubenswrapper[4858]: W1122 09:40:17.429763 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c109df0_4804_4777_8142_2169d7b485f7.slice/crio-7e5ead79796081cdae0748607db9b3540f8048080a4798de209f898d7cd13bf6 WatchSource:0}: Error finding container 7e5ead79796081cdae0748607db9b3540f8048080a4798de209f898d7cd13bf6: Status 404 returned error can't find the container with id 7e5ead79796081cdae0748607db9b3540f8048080a4798de209f898d7cd13bf6 Nov 22 09:40:17 crc kubenswrapper[4858]: I1122 09:40:17.546433 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178ee462-fc5c-4fc1-bdbc-22251a60c6a1" path="/var/lib/kubelet/pods/178ee462-fc5c-4fc1-bdbc-22251a60c6a1/volumes" Nov 22 09:40:17 crc kubenswrapper[4858]: I1122 09:40:17.547718 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f0d7b5-53a2-4d57-8d3e-fce52b6fd098" path="/var/lib/kubelet/pods/81f0d7b5-53a2-4d57-8d3e-fce52b6fd098/volumes" Nov 22 09:40:18 crc kubenswrapper[4858]: I1122 09:40:18.350186 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c109df0-4804-4777-8142-2169d7b485f7" containerID="f2f0bf21da4651a163a8d9b911536f42185e2a1c763e5d461bae36fc4ba2f4d0" exitCode=0 Nov 22 09:40:18 crc kubenswrapper[4858]: I1122 09:40:18.350237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerDied","Data":"f2f0bf21da4651a163a8d9b911536f42185e2a1c763e5d461bae36fc4ba2f4d0"} Nov 22 09:40:18 crc kubenswrapper[4858]: I1122 09:40:18.350267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerStarted","Data":"7e5ead79796081cdae0748607db9b3540f8048080a4798de209f898d7cd13bf6"} Nov 22 09:40:19 crc kubenswrapper[4858]: E1122 09:40:19.223695 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:19 crc kubenswrapper[4858]: E1122 09:40:19.223966 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts podName:15c7de97-b620-4e9b-8e17-27da546d6fb8 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:51.223951621 +0000 UTC m=+9013.065374617 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts") pod "novaapi04c7-account-delete-q782b" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8") : configmap "openstack-scripts" not found Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.364431 4858 generic.go:334] "Generic (PLEG): container finished" podID="64b43663-db69-4e42-a14e-85cc35b48dc3" containerID="7e2fdf55afbc977857bfd741ee23ebdf1ec7fef9d5cc6c0b8e22d103a1bd9b4a" exitCode=137 Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.364731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh2e22-account-delete-nqn9k" event={"ID":"64b43663-db69-4e42-a14e-85cc35b48dc3","Type":"ContainerDied","Data":"7e2fdf55afbc977857bfd741ee23ebdf1ec7fef9d5cc6c0b8e22d103a1bd9b4a"} Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.454487 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.634894 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmr4\" (UniqueName: \"kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4\") pod \"64b43663-db69-4e42-a14e-85cc35b48dc3\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.636404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts\") pod \"64b43663-db69-4e42-a14e-85cc35b48dc3\" (UID: \"64b43663-db69-4e42-a14e-85cc35b48dc3\") " Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.638623 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64b43663-db69-4e42-a14e-85cc35b48dc3" (UID: "64b43663-db69-4e42-a14e-85cc35b48dc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.644090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4" (OuterVolumeSpecName: "kube-api-access-dlmr4") pod "64b43663-db69-4e42-a14e-85cc35b48dc3" (UID: "64b43663-db69-4e42-a14e-85cc35b48dc3"). InnerVolumeSpecName "kube-api-access-dlmr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.737944 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmr4\" (UniqueName: \"kubernetes.io/projected/64b43663-db69-4e42-a14e-85cc35b48dc3-kube-api-access-dlmr4\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:19 crc kubenswrapper[4858]: I1122 09:40:19.737985 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b43663-db69-4e42-a14e-85cc35b48dc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.245654 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.246054 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts podName:9c43106d-cbb9-4b9e-93d3-acb28caa5fc6 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:52.24603344 +0000 UTC m=+9014.087456546 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts") pod "heataca4-account-delete-65j4m" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6") : configmap "openstack-scripts" not found Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.327304 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.329855 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.331481 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.331593 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.347045 4858 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 09:40:20 crc kubenswrapper[4858]: E1122 09:40:20.347699 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts podName:eb9be543-7566-4423-b4ed-5d9596cf21a4 nodeName:}" failed. No retries permitted until 2025-11-22 09:40:52.347644012 +0000 UTC m=+9014.189067028 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts") pod "novacell0b969-account-delete-2lntr" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4") : configmap "openstack-scripts" not found Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.378040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh2e22-account-delete-nqn9k" event={"ID":"64b43663-db69-4e42-a14e-85cc35b48dc3","Type":"ContainerDied","Data":"1ae86203c3f1fb6500a30e8580f83b2be9b6bbad04cc76c0f981952f2add976e"} Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.378117 4858 scope.go:117] "RemoveContainer" containerID="7e2fdf55afbc977857bfd741ee23ebdf1ec7fef9d5cc6c0b8e22d103a1bd9b4a" Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.378669 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh2e22-account-delete-nqn9k" Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.386091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerStarted","Data":"4ebd5dea4d50a0014a20dde343756b3d3de97823a5999b3243083a12f7e6acae"} Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.447085 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:40:20 crc kubenswrapper[4858]: I1122 09:40:20.452495 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh2e22-account-delete-nqn9k"] Nov 22 09:40:21 crc kubenswrapper[4858]: I1122 09:40:21.555539 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b43663-db69-4e42-a14e-85cc35b48dc3" path="/var/lib/kubelet/pods/64b43663-db69-4e42-a14e-85cc35b48dc3/volumes" Nov 22 09:40:21 crc kubenswrapper[4858]: I1122 09:40:21.945971 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.075780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts\") pod \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.075848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx48r\" (UniqueName: \"kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r\") pod \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\" (UID: \"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.077084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.083912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r" (OuterVolumeSpecName: "kube-api-access-cx48r") pod "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" (UID: "9c43106d-cbb9-4b9e-93d3-acb28caa5fc6"). InnerVolumeSpecName "kube-api-access-cx48r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.178380 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx48r\" (UniqueName: \"kubernetes.io/projected/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-kube-api-access-cx48r\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.178404 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.262559 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.337999 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.381869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46nnd\" (UniqueName: \"kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd\") pod \"15c7de97-b620-4e9b-8e17-27da546d6fb8\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.381920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prbj6\" (UniqueName: \"kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6\") pod \"eb9be543-7566-4423-b4ed-5d9596cf21a4\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.382000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts\") pod \"15c7de97-b620-4e9b-8e17-27da546d6fb8\" (UID: \"15c7de97-b620-4e9b-8e17-27da546d6fb8\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.382046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts\") pod \"eb9be543-7566-4423-b4ed-5d9596cf21a4\" (UID: \"eb9be543-7566-4423-b4ed-5d9596cf21a4\") " Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.382529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15c7de97-b620-4e9b-8e17-27da546d6fb8" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.382749 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb9be543-7566-4423-b4ed-5d9596cf21a4" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.384966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6" (OuterVolumeSpecName: "kube-api-access-prbj6") pod "eb9be543-7566-4423-b4ed-5d9596cf21a4" (UID: "eb9be543-7566-4423-b4ed-5d9596cf21a4"). InnerVolumeSpecName "kube-api-access-prbj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.385456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd" (OuterVolumeSpecName: "kube-api-access-46nnd") pod "15c7de97-b620-4e9b-8e17-27da546d6fb8" (UID: "15c7de97-b620-4e9b-8e17-27da546d6fb8"). InnerVolumeSpecName "kube-api-access-46nnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.409162 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c109df0-4804-4777-8142-2169d7b485f7" containerID="4ebd5dea4d50a0014a20dde343756b3d3de97823a5999b3243083a12f7e6acae" exitCode=0 Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.409233 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerDied","Data":"4ebd5dea4d50a0014a20dde343756b3d3de97823a5999b3243083a12f7e6acae"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.414210 4858 generic.go:334] "Generic (PLEG): container finished" podID="eb9be543-7566-4423-b4ed-5d9596cf21a4" containerID="3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c" exitCode=137 Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.414285 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0b969-account-delete-2lntr" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.414299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0b969-account-delete-2lntr" event={"ID":"eb9be543-7566-4423-b4ed-5d9596cf21a4","Type":"ContainerDied","Data":"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.414359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0b969-account-delete-2lntr" event={"ID":"eb9be543-7566-4423-b4ed-5d9596cf21a4","Type":"ContainerDied","Data":"f5302204afff924b438835c055a9015d89098bd53d75ecd91dc452276add1d9c"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.414382 4858 scope.go:117] "RemoveContainer" containerID="3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.418542 4858 generic.go:334] "Generic (PLEG): container finished" podID="15c7de97-b620-4e9b-8e17-27da546d6fb8" containerID="f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1" exitCode=137 Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.418630 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi04c7-account-delete-q782b" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.418634 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi04c7-account-delete-q782b" event={"ID":"15c7de97-b620-4e9b-8e17-27da546d6fb8","Type":"ContainerDied","Data":"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.418762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi04c7-account-delete-q782b" event={"ID":"15c7de97-b620-4e9b-8e17-27da546d6fb8","Type":"ContainerDied","Data":"3e0354f6596301fcac89f59c44171064d318ff8985d02887d289086b64642e98"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.433919 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" containerID="071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b" exitCode=137 Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.433968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heataca4-account-delete-65j4m" event={"ID":"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6","Type":"ContainerDied","Data":"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.433976 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heataca4-account-delete-65j4m" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.434000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heataca4-account-delete-65j4m" event={"ID":"9c43106d-cbb9-4b9e-93d3-acb28caa5fc6","Type":"ContainerDied","Data":"660c340e4e0f621b51dea65cd5896ccccfe58542175983992935a581d5a73832"} Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.452514 4858 scope.go:117] "RemoveContainer" containerID="3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c" Nov 22 09:40:22 crc kubenswrapper[4858]: E1122 09:40:22.459733 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c\": container with ID starting with 3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c not found: ID does not exist" containerID="3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.459804 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c"} err="failed to get container status \"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c\": rpc error: code = NotFound desc = could not find container \"3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c\": container with ID starting with 3038df8838c2e0e17faafd490448f9c28a12655615a19aa99b2d362c9cf7243c not found: ID does not exist" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.459835 4858 scope.go:117] "RemoveContainer" containerID="f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.470774 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.480518 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi04c7-account-delete-q782b"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.483771 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c7de97-b620-4e9b-8e17-27da546d6fb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.483815 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9be543-7566-4423-b4ed-5d9596cf21a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.486180 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46nnd\" (UniqueName: \"kubernetes.io/projected/15c7de97-b620-4e9b-8e17-27da546d6fb8-kube-api-access-46nnd\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.486226 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prbj6\" (UniqueName: \"kubernetes.io/projected/eb9be543-7566-4423-b4ed-5d9596cf21a4-kube-api-access-prbj6\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.487970 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.488469 4858 scope.go:117] "RemoveContainer" 
containerID="f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1" Nov 22 09:40:22 crc kubenswrapper[4858]: E1122 09:40:22.489020 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1\": container with ID starting with f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1 not found: ID does not exist" containerID="f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.489044 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1"} err="failed to get container status \"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1\": rpc error: code = NotFound desc = could not find container \"f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1\": container with ID starting with f2054d001e82dc0ea10dc9213fc16ad82140bf808ff26d676f37201250bf09d1 not found: ID does not exist" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.489062 4858 scope.go:117] "RemoveContainer" containerID="071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.492776 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heataca4-account-delete-65j4m"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.497368 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.501599 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell0b969-account-delete-2lntr"] Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.505147 4858 scope.go:117] "RemoveContainer" containerID="071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b" Nov 22 09:40:22 crc kubenswrapper[4858]: E1122 09:40:22.505558 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b\": container with ID starting with 071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b not found: ID does not exist" containerID="071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b" Nov 22 09:40:22 crc kubenswrapper[4858]: I1122 09:40:22.505590 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b"} err="failed to get container status \"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b\": rpc error: code = NotFound desc = could not find container \"071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b\": container with ID starting with 071afc9ee9159273dc79b9904ea0e8372ff21c40eb87fafd2cb2229e5b86405b not found: ID does not exist" Nov 22 09:40:23 crc kubenswrapper[4858]: I1122 09:40:23.452198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerStarted","Data":"8997c079b76ece97473531937c9fae0a127fdafdb3d096cf2c66697142e28703"} Nov 22 09:40:23 crc kubenswrapper[4858]: I1122 09:40:23.475394 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-ck5f9" podStartSLOduration=2.97051511 podStartE2EDuration="7.475377448s" podCreationTimestamp="2025-11-22 09:40:16 +0000 UTC" firstStartedPulling="2025-11-22 09:40:18.352738909 +0000 UTC m=+8980.194161935" lastFinishedPulling="2025-11-22 09:40:22.857601227 +0000 UTC m=+8984.699024273" observedRunningTime="2025-11-22 09:40:23.470715918 +0000 UTC m=+8985.312138934" watchObservedRunningTime="2025-11-22 09:40:23.475377448 +0000 UTC m=+8985.316800474" Nov 22 09:40:23 crc kubenswrapper[4858]: I1122 09:40:23.547068 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c7de97-b620-4e9b-8e17-27da546d6fb8" path="/var/lib/kubelet/pods/15c7de97-b620-4e9b-8e17-27da546d6fb8/volumes" Nov 22 09:40:23 crc kubenswrapper[4858]: I1122 09:40:23.547659 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" path="/var/lib/kubelet/pods/9c43106d-cbb9-4b9e-93d3-acb28caa5fc6/volumes" Nov 22 09:40:23 crc kubenswrapper[4858]: I1122 09:40:23.548212 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9be543-7566-4423-b4ed-5d9596cf21a4" path="/var/lib/kubelet/pods/eb9be543-7566-4423-b4ed-5d9596cf21a4/volumes" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.344094 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.420368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9zd\" (UniqueName: \"kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd\") pod \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.421233 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") pod \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\" (UID: \"1e5b4cdf-1c7e-47c4-8921-00df1e643887\") " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.427111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd" (OuterVolumeSpecName: "kube-api-access-gl9zd") pod "1e5b4cdf-1c7e-47c4-8921-00df1e643887" (UID: "1e5b4cdf-1c7e-47c4-8921-00df1e643887"). InnerVolumeSpecName "kube-api-access-gl9zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.478541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1" (OuterVolumeSpecName: "mariadb-data") pod "1e5b4cdf-1c7e-47c4-8921-00df1e643887" (UID: "1e5b4cdf-1c7e-47c4-8921-00df1e643887"). InnerVolumeSpecName "pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.479529 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" containerID="748c7fd5b8d2394c9cb02c31b7296c97713c51013b8ab56c9ede3e3f67b3d1dd" exitCode=137 Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.479603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7b14e62d-03f3-44cf-9b81-f5c0511865cd","Type":"ContainerDied","Data":"748c7fd5b8d2394c9cb02c31b7296c97713c51013b8ab56c9ede3e3f67b3d1dd"} Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.481203 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" containerID="5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4" exitCode=137 Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.481231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1e5b4cdf-1c7e-47c4-8921-00df1e643887","Type":"ContainerDied","Data":"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4"} Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.481263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1e5b4cdf-1c7e-47c4-8921-00df1e643887","Type":"ContainerDied","Data":"cb5bbc881284fe0d354f5aa29de4977b7a3456be84eb03ccbded6bbce48dd679"} Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.481279 4858 scope.go:117] "RemoveContainer" containerID="5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.481282 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.519037 4858 scope.go:117] "RemoveContainer" containerID="5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.519394 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:40:24 crc kubenswrapper[4858]: E1122 09:40:24.519880 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4\": container with ID starting with 5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4 not found: ID does not exist" containerID="5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.519932 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4"} err="failed to get container status \"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4\": rpc error: code = NotFound desc = could not find container \"5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4\": container with ID starting with 5f6858eefddda78fac4c358326945f22fceeae43b779710206c0e4056e9d94a4 not found: ID does not exist" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.523223 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") on node \"crc\" " Nov 22 09:40:24 crc 
kubenswrapper[4858]: I1122 09:40:24.523255 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9zd\" (UniqueName: \"kubernetes.io/projected/1e5b4cdf-1c7e-47c4-8921-00df1e643887-kube-api-access-gl9zd\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.526638 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.547809 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.547940 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1") on node "crc" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.551968 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.624239 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert\") pod \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.624419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnqg\" (UniqueName: \"kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg\") pod \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.629478 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg" (OuterVolumeSpecName: "kube-api-access-ghnqg") pod "7b14e62d-03f3-44cf-9b81-f5c0511865cd" (UID: "7b14e62d-03f3-44cf-9b81-f5c0511865cd"). InnerVolumeSpecName "kube-api-access-ghnqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.630019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") pod \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\" (UID: \"7b14e62d-03f3-44cf-9b81-f5c0511865cd\") " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.630406 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnqg\" (UniqueName: \"kubernetes.io/projected/7b14e62d-03f3-44cf-9b81-f5c0511865cd-kube-api-access-ghnqg\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.630429 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9b6f86de-d441-4ee2-821e-5b5b2c328fa1\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.631600 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "7b14e62d-03f3-44cf-9b81-f5c0511865cd" (UID: "7b14e62d-03f3-44cf-9b81-f5c0511865cd"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.638707 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57" (OuterVolumeSpecName: "ovn-data") pod "7b14e62d-03f3-44cf-9b81-f5c0511865cd" (UID: "7b14e62d-03f3-44cf-9b81-f5c0511865cd"). InnerVolumeSpecName "pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.731746 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") on node \"crc\" " Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.731780 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/7b14e62d-03f3-44cf-9b81-f5c0511865cd-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.750105 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.750404 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57") on node "crc" Nov 22 09:40:24 crc kubenswrapper[4858]: I1122 09:40:24.833201 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bd6af92c-ea88-408a-9ad3-ef344c813e57\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.498148 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"7b14e62d-03f3-44cf-9b81-f5c0511865cd","Type":"ContainerDied","Data":"af2dc15b08c986658fceeb53e71d1e59e12e1725093b35dd9f183bdd1af449a8"} Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.498500 4858 scope.go:117] "RemoveContainer" containerID="748c7fd5b8d2394c9cb02c31b7296c97713c51013b8ab56c9ede3e3f67b3d1dd" Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.498395 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.551984 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" path="/var/lib/kubelet/pods/1e5b4cdf-1c7e-47c4-8921-00df1e643887/volumes" Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.552606 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:40:25 crc kubenswrapper[4858]: I1122 09:40:25.552667 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:40:26 crc kubenswrapper[4858]: I1122 09:40:26.905478 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:26 crc kubenswrapper[4858]: I1122 09:40:26.906829 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:26 crc kubenswrapper[4858]: I1122 09:40:26.957117 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:27 crc kubenswrapper[4858]: I1122 09:40:27.555106 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" path="/var/lib/kubelet/pods/7b14e62d-03f3-44cf-9b81-f5c0511865cd/volumes" Nov 22 09:40:28 crc kubenswrapper[4858]: I1122 09:40:28.631635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:28 crc kubenswrapper[4858]: I1122 09:40:28.684053 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:30 crc kubenswrapper[4858]: E1122 09:40:30.326058 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:30 crc kubenswrapper[4858]: E1122 09:40:30.327478 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:30 crc kubenswrapper[4858]: E1122 09:40:30.328727 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:30 crc kubenswrapper[4858]: E1122 09:40:30.328765 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:40:30 crc kubenswrapper[4858]: I1122 09:40:30.558615 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ck5f9" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="registry-server" containerID="cri-o://8997c079b76ece97473531937c9fae0a127fdafdb3d096cf2c66697142e28703" gracePeriod=2 Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.571916 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c109df0-4804-4777-8142-2169d7b485f7" containerID="8997c079b76ece97473531937c9fae0a127fdafdb3d096cf2c66697142e28703" exitCode=0 Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.572002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerDied","Data":"8997c079b76ece97473531937c9fae0a127fdafdb3d096cf2c66697142e28703"} Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.841371 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.851538 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n699c\" (UniqueName: \"kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c\") pod \"9c109df0-4804-4777-8142-2169d7b485f7\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.851670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content\") pod \"9c109df0-4804-4777-8142-2169d7b485f7\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.851770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities\") pod \"9c109df0-4804-4777-8142-2169d7b485f7\" (UID: \"9c109df0-4804-4777-8142-2169d7b485f7\") " Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.853123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities" (OuterVolumeSpecName: "utilities") pod "9c109df0-4804-4777-8142-2169d7b485f7" (UID: "9c109df0-4804-4777-8142-2169d7b485f7"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.857931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c" (OuterVolumeSpecName: "kube-api-access-n699c") pod "9c109df0-4804-4777-8142-2169d7b485f7" (UID: "9c109df0-4804-4777-8142-2169d7b485f7"). InnerVolumeSpecName "kube-api-access-n699c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.919256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c109df0-4804-4777-8142-2169d7b485f7" (UID: "9c109df0-4804-4777-8142-2169d7b485f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.953053 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.953085 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c109df0-4804-4777-8142-2169d7b485f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:31 crc kubenswrapper[4858]: I1122 09:40:31.953096 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n699c\" (UniqueName: \"kubernetes.io/projected/9c109df0-4804-4777-8142-2169d7b485f7-kube-api-access-n699c\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.588924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck5f9" event={"ID":"9c109df0-4804-4777-8142-2169d7b485f7","Type":"ContainerDied","Data":"7e5ead79796081cdae0748607db9b3540f8048080a4798de209f898d7cd13bf6"} Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.589027 4858 scope.go:117] "RemoveContainer" containerID="8997c079b76ece97473531937c9fae0a127fdafdb3d096cf2c66697142e28703" Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.589055 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck5f9" Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.635046 4858 scope.go:117] "RemoveContainer" containerID="4ebd5dea4d50a0014a20dde343756b3d3de97823a5999b3243083a12f7e6acae" Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.661454 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.676312 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ck5f9"] Nov 22 09:40:32 crc kubenswrapper[4858]: I1122 09:40:32.677422 4858 scope.go:117] "RemoveContainer" containerID="f2f0bf21da4651a163a8d9b911536f42185e2a1c763e5d461bae36fc4ba2f4d0" Nov 22 09:40:33 crc kubenswrapper[4858]: I1122 09:40:33.548142 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c109df0-4804-4777-8142-2169d7b485f7" path="/var/lib/kubelet/pods/9c109df0-4804-4777-8142-2169d7b485f7/volumes" Nov 22 09:40:40 crc kubenswrapper[4858]: E1122 09:40:40.326524 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:40 crc kubenswrapper[4858]: E1122 09:40:40.329253 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:40 crc kubenswrapper[4858]: E1122 09:40:40.331060 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 09:40:40 crc kubenswrapper[4858]: E1122 09:40:40.331109 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7f4fc69954-bcngv" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:40:42 crc kubenswrapper[4858]: I1122 09:40:42.896454 4858 scope.go:117] "RemoveContainer" containerID="0aa85a37e97e72c8efa2b73911f4eef75c838f7fb6915cad5a5299b8caecf2b7" Nov 22 09:40:42 crc kubenswrapper[4858]: I1122 09:40:42.924836 4858 scope.go:117] "RemoveContainer" containerID="065e51e4b82bfd09ef58eeccb1d741e51d5167ffe8e2bd644d87495b643cbfb5" Nov 22 09:40:42 crc kubenswrapper[4858]: I1122 09:40:42.951630 4858 scope.go:117] "RemoveContainer" containerID="038e302d3860418f90b2b7e4958cf548fa4093b4d69d684bd726e1fdd1a9fbf2" Nov 22 09:40:42 crc kubenswrapper[4858]: I1122 09:40:42.978693 4858 scope.go:117] "RemoveContainer" containerID="761bd583458b9228a46e2048c9579370d0d1ec7104acbbde74a8d9d0c1f15d55" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.013365 4858 scope.go:117] "RemoveContainer" containerID="82d7d106549f4cab1563ffa6d0ff10088ff06828f89a22c3f44a74a78f1a2c15" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.038513 4858 
scope.go:117] "RemoveContainer" containerID="94bfef2893e1c9f03641f20fd271ae7cbde6ab65a624a8c20ea43f622935c4d2" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.062918 4858 scope.go:117] "RemoveContainer" containerID="48243763ff91a842163928192fc2ea246f302325792033ccd2427519d16f31b0" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.083080 4858 scope.go:117] "RemoveContainer" containerID="7847846fa51b831ff0dc10903739c4f5fccbb778f7df1c7d441f41c30798dea3" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.128844 4858 scope.go:117] "RemoveContainer" containerID="3538fb3f59251c148ca9ef352cd6933fb18a4d40fa1eeb03004322fc80fe564d" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.154911 4858 scope.go:117] "RemoveContainer" containerID="d837e865797bb898f874c63d6f5c7eaed4e0e01cf3976888d27222c06e7cc246" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.200047 4858 scope.go:117] "RemoveContainer" containerID="cd325d1c2af1c603c1fe84df51a0ecd6724e440095165ec34cc4c4d521a1494f" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.226273 4858 scope.go:117] "RemoveContainer" containerID="452a9ab7b1b4a1974cdad0d365d5a8a6fa77348bb175f5268abb56ed7e86bf62" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.254026 4858 scope.go:117] "RemoveContainer" containerID="e61d56cee7da5d021ef44da5aac6e5c36bb8477f6f981ef9461bdbf5be01bb27" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.302555 4858 scope.go:117] "RemoveContainer" containerID="7a1b9aa9bf7fdcfe3b6dd842717d88716652a749a754b92b43ad5226f5e6ec33" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.325554 4858 scope.go:117] "RemoveContainer" containerID="cddb36142f710de01a2a2604912a1de51c98b16778d69d3541cb2e91fd0be10f" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.345032 4858 scope.go:117] "RemoveContainer" containerID="8070c89d3808b68f0b98fb9cbd32312e22d937be61d9757f60eb633a06522feb" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.368546 4858 scope.go:117] "RemoveContainer" containerID="9f5e64397fcfbf30b8e57de5cd79bbaa5aa1cfb6dc41d738673c9552face9f4f" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.394957 4858 scope.go:117] "RemoveContainer" containerID="0d9c14545905e3cda2f017bb37cf1a67c2243ee303a9eec348eaebba94004931" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.423295 4858 scope.go:117] "RemoveContainer" containerID="c02bcf924af8e4c2d6ca90bd8a608ea834531a49916fa28f1e8aadbb6103b5f6" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.444926 4858 scope.go:117] "RemoveContainer" containerID="428bc38b18119c4305d118eb828b9d35bf76f7f0732bf893cb1b34f626cfecdb" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.470902 4858 scope.go:117] "RemoveContainer" containerID="9329c10d2543dce5392c0af5a7d61ebfe67fba02c6cbc2e7b19da53775192377" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.503995 4858 scope.go:117] "RemoveContainer" containerID="9198b94ea2533b167d04afd698dca553ec68666e838a06eb774281ed98603364" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.534581 4858 scope.go:117] "RemoveContainer" containerID="eeba1d571add7d50c07f586327e32795ac943ddbbdd3bc346a8173c54be363a8" Nov 22 09:40:43 crc kubenswrapper[4858]: I1122 09:40:43.560424 4858 scope.go:117] "RemoveContainer" containerID="60a7697719dcfe5cae1572c1e36b77399083669158f5e42f6d05ab4268425eff" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.066408 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.165562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle\") pod \"a36e4c2a-3eca-4150-867c-937eb02c77f1\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.165654 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qvtj\" (UniqueName: \"kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj\") pod \"a36e4c2a-3eca-4150-867c-937eb02c77f1\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.165763 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data\") pod \"a36e4c2a-3eca-4150-867c-937eb02c77f1\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.165801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom\") pod \"a36e4c2a-3eca-4150-867c-937eb02c77f1\" (UID: \"a36e4c2a-3eca-4150-867c-937eb02c77f1\") " Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.171313 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a36e4c2a-3eca-4150-867c-937eb02c77f1" (UID: "a36e4c2a-3eca-4150-867c-937eb02c77f1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.171476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj" (OuterVolumeSpecName: "kube-api-access-7qvtj") pod "a36e4c2a-3eca-4150-867c-937eb02c77f1" (UID: "a36e4c2a-3eca-4150-867c-937eb02c77f1"). InnerVolumeSpecName "kube-api-access-7qvtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.195309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a36e4c2a-3eca-4150-867c-937eb02c77f1" (UID: "a36e4c2a-3eca-4150-867c-937eb02c77f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.208690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data" (OuterVolumeSpecName: "config-data") pod "a36e4c2a-3eca-4150-867c-937eb02c77f1" (UID: "a36e4c2a-3eca-4150-867c-937eb02c77f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.266560 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.266592 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qvtj\" (UniqueName: \"kubernetes.io/projected/a36e4c2a-3eca-4150-867c-937eb02c77f1-kube-api-access-7qvtj\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.266605 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.266614 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4c2a-3eca-4150-867c-937eb02c77f1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.792960 4858 generic.go:334] "Generic (PLEG): container finished" podID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" exitCode=137 Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.793027 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7f4fc69954-bcngv" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.793051 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f4fc69954-bcngv" event={"ID":"a36e4c2a-3eca-4150-867c-937eb02c77f1","Type":"ContainerDied","Data":"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9"} Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.793624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7f4fc69954-bcngv" event={"ID":"a36e4c2a-3eca-4150-867c-937eb02c77f1","Type":"ContainerDied","Data":"1e2729a246c7211df409b15b2837ab43bef49cb3a98d5fd6f2d82d99985778f8"} Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.793747 4858 scope.go:117] "RemoveContainer" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.826146 4858 scope.go:117] "RemoveContainer" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" Nov 22 09:40:44 crc kubenswrapper[4858]: E1122 09:40:44.827225 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9\": container with ID starting with 907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9 not found: ID does not exist" containerID="907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.827280 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9"} err="failed to get container status \"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9\": rpc error: code = NotFound desc = could not find container \"907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9\": container with ID starting with 
907a159693c3f3bc440ea41ba8e46ec6d6b21434c5a4d7c57645b54419bdfaf9 not found: ID does not exist" Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.843934 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:40:44 crc kubenswrapper[4858]: I1122 09:40:44.854067 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7f4fc69954-bcngv"] Nov 22 09:40:45 crc kubenswrapper[4858]: I1122 09:40:45.548859 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" path="/var/lib/kubelet/pods/a36e4c2a-3eca-4150-867c-937eb02c77f1/volumes" Nov 22 09:41:44 crc kubenswrapper[4858]: I1122 09:41:44.348959 4858 scope.go:117] "RemoveContainer" containerID="84077731cbf14b4a7a9b6c9a8f86172f3b454069f7e80249ba2ab4d94ebd58fb" Nov 22 09:41:44 crc kubenswrapper[4858]: I1122 09:41:44.385204 4858 scope.go:117] "RemoveContainer" containerID="918209d7d13a78e17d2265b8b6e9586b5d6360719a05e32d9d26a420c7ab48d1" Nov 22 09:41:44 crc kubenswrapper[4858]: I1122 09:41:44.417971 4858 scope.go:117] "RemoveContainer" containerID="4f38792396e3d0b3fe3482c717f089c4843b54559f52cb8be1e2ed5bed2a403e" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.113023 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dhdj8/must-gather-kbv6f"] Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="extract-content" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114051 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="extract-content" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114065 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c7de97-b620-4e9b-8e17-27da546d6fb8" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c7de97-b620-4e9b-8e17-27da546d6fb8" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114091 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="extract-utilities" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114104 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="extract-utilities" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114115 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114123 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114134 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b43663-db69-4e42-a14e-85cc35b48dc3" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114141 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b43663-db69-4e42-a14e-85cc35b48dc3" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114161 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114169 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114184 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9be543-7566-4423-b4ed-5d9596cf21a4" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114192 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9be543-7566-4423-b4ed-5d9596cf21a4" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114205 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114213 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114231 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114237 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: E1122 09:41:49.114247 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="registry-server" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114254 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="registry-server" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114449 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b14e62d-03f3-44cf-9b81-f5c0511865cd" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114469 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b43663-db69-4e42-a14e-85cc35b48dc3" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114484 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36e4c2a-3eca-4150-867c-937eb02c77f1" containerName="heat-engine" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114498 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c7de97-b620-4e9b-8e17-27da546d6fb8" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114510 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9be543-7566-4423-b4ed-5d9596cf21a4" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114526 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5b4cdf-1c7e-47c4-8921-00df1e643887" containerName="adoption" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c43106d-cbb9-4b9e-93d3-acb28caa5fc6" containerName="mariadb-account-delete" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.114552 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c109df0-4804-4777-8142-2169d7b485f7" containerName="registry-server" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.115633 
4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.118605 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dhdj8/must-gather-kbv6f"] Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.119878 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dhdj8"/"openshift-service-ca.crt" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.119916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dhdj8"/"default-dockercfg-fr5pd" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.120049 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dhdj8"/"kube-root-ca.crt" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.144717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjgk\" (UniqueName: \"kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.144835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.246120 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.246197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bjgk\" (UniqueName: \"kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.246675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.265758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bjgk\" (UniqueName: \"kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk\") pod \"must-gather-kbv6f\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.432061 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.847162 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dhdj8/must-gather-kbv6f"] Nov 22 09:41:49 crc kubenswrapper[4858]: I1122 09:41:49.855895 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:41:50 crc kubenswrapper[4858]: I1122 09:41:50.490049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" event={"ID":"bd53d614-35d2-47d3-a208-71084da6b55c","Type":"ContainerStarted","Data":"b558e52bd98995cba27e10f8bd368c49289918dccb7354dfe729f17e6600ffd7"} Nov 22 09:42:08 crc kubenswrapper[4858]: E1122 09:42:08.768100 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Nov 22 09:42:08 crc kubenswrapper[4858]: E1122 09:42:08.768901 4858 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 22 09:42:08 crc kubenswrapper[4858]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c Nov 22 09:42:08 crc kubenswrapper[4858]: echo "[disk usage checker] Started" Nov 22 09:42:08 crc kubenswrapper[4858]: target_dir="/must-gather" Nov 22 09:42:08 crc kubenswrapper[4858]: usage_percentage_limit="30" Nov 22 09:42:08 crc kubenswrapper[4858]: while true; do Nov 22 09:42:08 crc kubenswrapper[4858]: disk_usage=$(du -s "$target_dir" | awk '{print $1}') Nov 22 09:42:08 crc kubenswrapper[4858]: disk_space=$(df -P "$target_dir" | awk 'NR==2 {print $2}') Nov 22 09:42:08 crc kubenswrapper[4858]: usage_percentage=$(( (disk_usage * 100) / disk_space )) Nov 22 09:42:08 crc kubenswrapper[4858]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Nov 22 09:42:08 crc kubenswrapper[4858]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Nov 22 09:42:08 crc kubenswrapper[4858]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." 
Nov 22 09:42:08 crc kubenswrapper[4858]: pkill --signal SIGKILL gather Nov 22 09:42:08 crc kubenswrapper[4858]: exit 1 Nov 22 09:42:08 crc kubenswrapper[4858]: fi Nov 22 09:42:08 crc kubenswrapper[4858]: sleep 5 Nov 22 09:42:08 crc kubenswrapper[4858]: done & ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bjgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-kbv6f_openshift-must-gather-dhdj8(bd53d614-35d2-47d3-a208-71084da6b55c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 22 09:42:08 crc kubenswrapper[4858]: > logger="UnhandledError" Nov 22 09:42:08 crc kubenswrapper[4858]: E1122 09:42:08.772410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" podUID="bd53d614-35d2-47d3-a208-71084da6b55c" Nov 22 09:42:09 crc kubenswrapper[4858]: E1122 09:42:09.663266 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" podUID="bd53d614-35d2-47d3-a208-71084da6b55c" Nov 22 09:42:15 crc kubenswrapper[4858]: I1122 09:42:15.311718 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:42:15 crc kubenswrapper[4858]: I1122 09:42:15.312226 4858 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:42:23 crc kubenswrapper[4858]: I1122 09:42:23.787770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" event={"ID":"bd53d614-35d2-47d3-a208-71084da6b55c","Type":"ContainerStarted","Data":"ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc"} Nov 22 09:42:23 crc kubenswrapper[4858]: I1122 09:42:23.788285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" event={"ID":"bd53d614-35d2-47d3-a208-71084da6b55c","Type":"ContainerStarted","Data":"fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00"} Nov 22 09:42:23 crc kubenswrapper[4858]: I1122 09:42:23.807851 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" podStartSLOduration=2.123568833 podStartE2EDuration="34.807833226s" podCreationTimestamp="2025-11-22 09:41:49 +0000 UTC" firstStartedPulling="2025-11-22 09:41:49.855702217 +0000 UTC m=+9071.697125223" lastFinishedPulling="2025-11-22 09:42:22.53996661 +0000 UTC m=+9104.381389616" observedRunningTime="2025-11-22 09:42:23.807121293 +0000 UTC m=+9105.648544299" watchObservedRunningTime="2025-11-22 09:42:23.807833226 +0000 UTC m=+9105.649256232" Nov 22 09:42:24 crc kubenswrapper[4858]: I1122 09:42:24.547549 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dhdj8/must-gather-kbv6f"] Nov 22 09:42:24 crc kubenswrapper[4858]: I1122 09:42:24.558604 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dhdj8/must-gather-kbv6f"] Nov 22 09:42:24 crc kubenswrapper[4858]: I1122 09:42:24.795706 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" podUID="bd53d614-35d2-47d3-a208-71084da6b55c" containerName="gather" containerID="cri-o://fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00" gracePeriod=2 Nov 22 09:42:24 crc kubenswrapper[4858]: I1122 09:42:24.796127 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" podUID="bd53d614-35d2-47d3-a208-71084da6b55c" containerName="copy" containerID="cri-o://ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc" gracePeriod=2 Nov 22 09:42:25 crc kubenswrapper[4858]: I1122 09:42:25.806466 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dhdj8_must-gather-kbv6f_bd53d614-35d2-47d3-a208-71084da6b55c/copy/0.log" Nov 22 09:42:25 crc kubenswrapper[4858]: I1122 09:42:25.807193 4858 generic.go:334] "Generic (PLEG): container finished" podID="bd53d614-35d2-47d3-a208-71084da6b55c" containerID="ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc" exitCode=143 Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.566229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dhdj8_must-gather-kbv6f_bd53d614-35d2-47d3-a208-71084da6b55c/copy/0.log" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.568040 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dhdj8_must-gather-kbv6f_bd53d614-35d2-47d3-a208-71084da6b55c/gather/0.log" Nov 22 09:42:28 
crc kubenswrapper[4858]: I1122 09:42:28.568106 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.712543 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output\") pod \"bd53d614-35d2-47d3-a208-71084da6b55c\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.712691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bjgk\" (UniqueName: \"kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk\") pod \"bd53d614-35d2-47d3-a208-71084da6b55c\" (UID: \"bd53d614-35d2-47d3-a208-71084da6b55c\") " Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.714001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd53d614-35d2-47d3-a208-71084da6b55c" (UID: "bd53d614-35d2-47d3-a208-71084da6b55c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.714223 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd53d614-35d2-47d3-a208-71084da6b55c-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.718649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk" (OuterVolumeSpecName: "kube-api-access-8bjgk") pod "bd53d614-35d2-47d3-a208-71084da6b55c" (UID: "bd53d614-35d2-47d3-a208-71084da6b55c"). InnerVolumeSpecName "kube-api-access-8bjgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.815375 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bjgk\" (UniqueName: \"kubernetes.io/projected/bd53d614-35d2-47d3-a208-71084da6b55c-kube-api-access-8bjgk\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.836767 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dhdj8_must-gather-kbv6f_bd53d614-35d2-47d3-a208-71084da6b55c/copy/0.log" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.837317 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dhdj8_must-gather-kbv6f_bd53d614-35d2-47d3-a208-71084da6b55c/gather/0.log" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.837587 4858 generic.go:334] "Generic (PLEG): container finished" podID="bd53d614-35d2-47d3-a208-71084da6b55c" containerID="fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00" exitCode=137 Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.837638 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dhdj8/must-gather-kbv6f" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.837665 4858 scope.go:117] "RemoveContainer" containerID="ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.879892 4858 scope.go:117] "RemoveContainer" containerID="fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.922498 4858 scope.go:117] "RemoveContainer" containerID="ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc" Nov 22 09:42:28 crc kubenswrapper[4858]: E1122 09:42:28.923093 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc\": container with ID starting with ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc not found: ID does not exist" containerID="ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.923146 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc"} err="failed to get container status \"ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc\": rpc error: code = NotFound desc = could not find container \"ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc\": container with ID starting with ccaacae5cc832c86e08f386a09a763e6d42e58b5e75124d4e9e9fc136374d0dc not found: ID does not exist" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.923180 4858 scope.go:117] "RemoveContainer" containerID="fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00" Nov 22 09:42:28 crc kubenswrapper[4858]: E1122 09:42:28.923694 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00\": container with ID starting with fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00 not found: ID does not exist" containerID="fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00" Nov 22 09:42:28 crc kubenswrapper[4858]: I1122 09:42:28.923738 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00"} err="failed to get container status \"fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00\": rpc error: code = NotFound desc = could not find container \"fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00\": container with ID starting with fccd73ec4a9ddd7e819d1f6552a7f15216284f149383cc544940998aca1bdd00 not found: ID does not exist" Nov 22 09:42:29 crc kubenswrapper[4858]: I1122 09:42:29.546160 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd53d614-35d2-47d3-a208-71084da6b55c" path="/var/lib/kubelet/pods/bd53d614-35d2-47d3-a208-71084da6b55c/volumes" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.521521 4858 scope.go:117] "RemoveContainer" containerID="521ad7f882964279edf7d5590bacbc8eeace3ac39a83cda6a10df740dc827350" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.551973 4858 scope.go:117] "RemoveContainer" containerID="dd0d4a38c6628e6cd6833ecc0f37a9b78f79b92faf3d87c5ffac41a4d3c25c15" Nov 22 09:42:44 crc 
kubenswrapper[4858]: I1122 09:42:44.573354 4858 scope.go:117] "RemoveContainer" containerID="d2f8ef66b6a8e77f76210f4a45fe3aca5169cb0000916d8304fd25265cec38d1" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.589972 4858 scope.go:117] "RemoveContainer" containerID="6710e42193427ea5e698492be80b243408ec95b75ef41570bcefa42cabb6bd45" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.608599 4858 scope.go:117] "RemoveContainer" containerID="0eda1c6ea33c6848a009c3dc95830f5d0706331c8f0491fb87c141c53a0cbe4c" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.654968 4858 scope.go:117] "RemoveContainer" containerID="03f0619fca527648354a137f8ed45941fe3b9e6ed1682842bd4cb2b6eb5ae9f6" Nov 22 09:42:44 crc kubenswrapper[4858]: I1122 09:42:44.673301 4858 scope.go:117] "RemoveContainer" containerID="6ad6597a2759cc61aa76fd00e3a64b4ee32679b91be7e663c37976b726f4357e" Nov 22 09:42:45 crc kubenswrapper[4858]: I1122 09:42:45.312090 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:42:45 crc kubenswrapper[4858]: I1122 09:42:45.312580 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:43:15 crc kubenswrapper[4858]: I1122 09:43:15.312396 4858 patch_prober.go:28] interesting pod/machine-config-daemon-qkh9t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:43:15 crc kubenswrapper[4858]: I1122 09:43:15.312971 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:43:15 crc kubenswrapper[4858]: I1122 09:43:15.313019 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" Nov 22 09:43:15 crc kubenswrapper[4858]: I1122 09:43:15.313598 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e49c73afb70240d2bfb3f0d91318ec2304f8e894976f6aea5e6c68307db741f"} pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:43:15 crc kubenswrapper[4858]: I1122 09:43:15.313649 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" podUID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerName="machine-config-daemon" containerID="cri-o://0e49c73afb70240d2bfb3f0d91318ec2304f8e894976f6aea5e6c68307db741f" gracePeriod=600 Nov 22 09:43:16 crc kubenswrapper[4858]: I1122 09:43:16.294954 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="4ac3f217-ad73-4e89-b703-b42a3c6c9ed4" containerID="0e49c73afb70240d2bfb3f0d91318ec2304f8e894976f6aea5e6c68307db741f" exitCode=0 Nov 22 09:43:16 crc kubenswrapper[4858]: I1122 09:43:16.295179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerDied","Data":"0e49c73afb70240d2bfb3f0d91318ec2304f8e894976f6aea5e6c68307db741f"} Nov 22 09:43:16 crc kubenswrapper[4858]: I1122 09:43:16.295517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qkh9t" event={"ID":"4ac3f217-ad73-4e89-b703-b42a3c6c9ed4","Type":"ContainerStarted","Data":"17565e2e2b97bc46c10b6a46d91260fdc43dffb0b9d21e0254ee9d56b6ff765d"} Nov 22 09:43:16 crc kubenswrapper[4858]: I1122 09:43:16.295539 4858 scope.go:117] "RemoveContainer" containerID="01fd68ea41555792785fb6e916487eab52de958fcf12c6b766a957d14f305d82"